title | content | commands | url
---|---|---|---|
Chapter 4. View OpenShift Data Foundation Topology | Chapter 4. View OpenShift Data Foundation Topology The topology shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or alert indications. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/viewing-odf-topology_mcg-verify |
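The procedure above is entirely web-console based. As a rough command-line counterpart, the following sketch shows queries that surface similar cluster, node, and per-node deployment information; the openshift-storage namespace, the storagecluster resource name, and the zone label are assumptions based on a default OpenShift Data Foundation installation and are not taken from the chapter.

```shell
# Assumed defaults: ODF installed in the openshift-storage namespace.
# Overall storage cluster status (roughly the top level of the Topology view).
oc get storagecluster -n openshift-storage

# Nodes backing the cluster, with their zone label shown as a column.
oc get nodes -L topology.kubernetes.io/zone

# Deployments and pods running on one node (roughly the preview decorator view).
# <node-name> is a placeholder for a node shown in the topology.
oc get pods -n openshift-storage --field-selector spec.nodeName=<node-name> -o wide
```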
Installation Guide | Installation Guide Red Hat Enterprise Linux 6 Installing Red Hat Enterprise Linux 6.9 for all architectures Red Hat Customer Content Services Clayton Spicer Red Hat Customer Content Services [email protected] Petr Bokoc Red Hat Customer Content Services Tomas Capek Red Hat Customer Content Services Jack Reed Red Hat Customer Content Services Rudiger Landmann Red Hat Customer Content Services David Cantrell VNC installation Hans De Goede iSCSI Jon Masters Driver updates | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/index |
Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) | Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) As of the Red Hat Enterprise Linux 7.4 release, the Red Hat Resilient Storage Add-On provides support for running Samba in an active/active cluster configuration using Pacemaker. The Red Hat Resilient Storage Add-On includes the High Availability Add-On. Note For further information on support policies for Samba, see Support Policies for RHEL Resilient Storage - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal. This chapter describes how to configure a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. This use case requires that your system include the following components: Two nodes, which will be used to create the cluster running Clustered Samba. In this example, the nodes used are z1.example.com and z2.example.com , which have IP addresses of 192.168.1.151 and 192.168.1.152 . A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel. Configuring a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster requires that you perform the following steps. Create the cluster that will export the Samba shares and configure fencing for each node in the cluster, as described in Section 4.1, "Creating the Cluster" . Configure a gfs2 file system mounted on the clustered LVM logical volume my_clv on the shared storage for the nodes in the cluster, as described in Section 4.2, "Configuring a Clustered LVM Volume with a GFS2 File System" . Configure Samba on each node in the cluster, as described in Section 4.3, "Configuring Samba" . Create the Samba cluster resources as described in Section 4.4, "Configuring the Samba Cluster Resources" . Test the Samba share you have configured, as described in Section 4.5, "Testing the Resource Configuration" . 4.1. Creating the Cluster Use the following procedure to install and create the cluster to use for the Samba service: Install the cluster software on nodes z1.example.com and z2.example.com , using the procedure provided in Section 1.1, "Cluster Software Installation" . Create the two-node cluster that consists of z1.example.com and z2.example.com , using the procedure provided in Section 1.2, "Cluster Creation" . As in that example procedure, this use case names the cluster my_cluster . Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration" . This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-hasamba-haaa |
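The chapter defers the exact commands to the referenced sections. As a condensed sketch of what Section 4.1 amounts to on RHEL 7, the following pcs sequence authenticates the nodes, creates the my_cluster cluster, and configures APC fencing; the host names and switch come from the example above, while the fence agent, outlet mapping, and credentials are illustrative assumptions that should be checked against Sections 1.1 to 1.3.

```shell
# Run as root on one cluster node. Assumes the High Availability packages
# (pcs, pacemaker, fence agents) are installed and pcsd is running on both nodes.

# Authenticate the cluster nodes to each other (RHEL 7 pcs syntax).
pcs cluster auth z1.example.com z2.example.com -u hacluster

# Create and start the two-node cluster named my_cluster.
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com

# Fence each node through its own outlet on the APC power switch zapc.example.com.
# Agent name, outlet numbers, and login details here are illustrative only.
pcs stonith create myapc fence_apc_snmp \
    ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" \
    login="apc" passwd="apc"
```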
36.6.3. IBM S/390 and IBM eServer zSeries Systems | 36.6.3. IBM S/390 and IBM eServer zSeries Systems The IBM S/390 and IBM eServer zSeries systems use z/IPL as the boot loader, which uses /etc/zipl.conf as the configuration file. Confirm that the file contains a section with the same version as the kernel package just installed: Notice that the default is not set to the new kernel. To configure z/IPL to boot the new kernel by default change the value of the default variable to the name of the section that contains the new kernel. The first line of each section contains the name in brackets. After modifying the configuration file, run the following command as root to enable the changes: Begin testing the new kernel by rebooting the computer and watching the messages to ensure that the hardware is detected properly. | [
"[defaultboot] default=old target=/boot/ [linux] image=/boot/vmlinuz-2.6.9-5.EL ramdisk=/boot/initrd-2.6.9-5.EL.img parameters=\"root=LABEL=/\" [old] image=/boot/vmlinuz-2.6.9-1.906_EL ramdisk=/boot/initrd-2.6.9-1.906_EL.img parameters=\"root=LABEL=/\"",
"/sbin/zipl"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/verifying_the_boot_loader-ibm_s390_and_ibm_eserver_zseries_systems |
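A minimal sketch of the edit-and-activate cycle described above. The section name linux is taken from the sample configuration; the sed one-liner is only one way to change the default stanza, and editing /etc/zipl.conf by hand works equally well.

```shell
# Point the default stanza at the section that boots the new kernel,
# then rewrite the boot record so the change takes effect.
sed -i 's/^default=old/default=linux/' /etc/zipl.conf
/sbin/zipl

# Reboot to test the new kernel and watch the console messages
# to confirm that the hardware is detected properly.
shutdown -r now
```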
Chapter 1. Introduction to Public-Key Cryptography | Chapter 1. Introduction to Public-Key Cryptography Public-key cryptography and related standards underlie the security features of many products such as signed and encrypted email, single sign-on, and Transport Layer Security/Secure Sockets Layer (SSL/TLS) communications. This chapter covers the basic concepts of public-key cryptography. Internet traffic, which passes information through intermediate computers, can be intercepted by a third party: Eavesdropping Information remains intact, but its privacy is compromised. For example, someone could gather credit card numbers, record a sensitive conversation, or intercept classified information. Tampering Information in transit is changed or replaced and then sent to the recipient. For example, someone could alter an order for goods or change a person's resume. Impersonation Information passes to a person who poses as the intended recipient. Impersonation can take two forms: Spoofing. A person can pretend to be someone else. For example, a person can pretend to have the email address [email protected] or a computer can falsely identify itself as a site called www.example.net . Misrepresentation. A person or organization can misrepresent itself. For example, a site called www.example.net can purport to be an on-line furniture store when it really receives credit-card payments but never sends any goods. Public-key cryptography provides protection against Internet-based attacks through: Encryption and decryption Encryption and decryption allow two communicating parties to disguise information they send to each other. The sender encrypts, or scrambles, information before sending it. The receiver decrypts, or unscrambles, the information after receiving it. While in transit, the encrypted information is unintelligible to an intruder. Tamper detection Tamper detection allows the recipient of information to verify that it has not been modified in transit. Any attempts to modify or substitute data are detected. Authentication Authentication allows the recipient of information to determine its origin by confirming the sender's identity. Nonrepudiation Nonrepudiation prevents the sender of information from claiming at a later date that the information was never sent. 1.1. Encryption and Decryption Encryption is the process of transforming information so it is unintelligible to anyone but the intended recipient. Decryption is the process of decoding encrypted information. A cryptographic algorithm, also called a cipher , is a mathematical function used for encryption or decryption. Usually, two related functions are used, one for encryption and the other for decryption. With most modern cryptography, the ability to keep encrypted information secret is based not on the cryptographic algorithm, which is widely known, but on a number called a key that must be used with the algorithm to produce an encrypted result or to decrypt previously encrypted information. Decryption with the correct key is simple. Decryption without the correct key is very difficult, if not impossible. 1.1.1. Symmetric-Key Encryption With symmetric-key encryption, the encryption key can be calculated from the decryption key and vice versa. With most symmetric algorithms, the same key is used for both encryption and decryption, as shown in Figure 1.1, "Symmetric-Key Encryption" . Figure 1.1. 
Symmetric-Key Encryption Implementations of symmetric-key encryption can be highly efficient, so that users do not experience any significant time delay as a result of the encryption and decryption. Symmetric-key encryption is effective only if the symmetric key is kept secret by the two parties involved. If anyone else discovers the key, it affects both confidentiality and authentication. A person with an unauthorized symmetric key not only can decrypt messages sent with that key, but can encrypt new messages and send them as if they came from one of the legitimate parties using the key. Symmetric-key encryption plays an important role in SSL/TLS communication, which is widely used for authentication, tamper detection, and encryption over TCP/IP networks. SSL/TLS also uses techniques of public-key encryption, which is described in the next section. 1.1.2. Public-Key Encryption Public-key encryption (also called asymmetric encryption) involves a pair of keys, a public key and a private key, associated with an entity. Each public key is published, and the corresponding private key is kept secret. (For more information about the way public keys are published, see Section 1.3, "Certificates and Authentication" .) Data encrypted with a public key can be decrypted only with the corresponding private key. Figure 1.2, "Public-Key Encryption" shows a simplified view of the way public-key encryption works. Figure 1.2. Public-Key Encryption The scheme shown in Figure 1.2, "Public-Key Encryption" allows public keys to be freely distributed, while only authorized people are able to read data encrypted using this key. In general, to send encrypted data to someone, the data is encrypted with that person's public key, and the person receiving the encrypted data decrypts it with the corresponding private key. Compared with symmetric-key encryption, public-key encryption requires more processing and may not be feasible for encrypting and decrypting large amounts of data. However, it is possible to use public-key encryption to send a symmetric key, which can then be used to encrypt additional data. This is the approach used by the SSL/TLS protocols. The reverse of the scheme shown in Figure 1.2, "Public-Key Encryption" also works: data encrypted with a private key can be decrypted only with the corresponding public key. This is not a recommended practice to encrypt sensitive data, however, because it means that anyone with the public key, which is by definition published, could decrypt the data. Nevertheless, private-key encryption is useful because it means the private key can be used to sign data with a digital signature, an important requirement for electronic commerce and other commercial applications of cryptography. Client software such as Mozilla Firefox can then use the public key to confirm that the message was signed with the appropriate private key and that it has not been tampered with since being signed. Section 1.2, "Digital Signatures" illustrates how this confirmation process works. 1.1.3. Key Length and Encryption Strength Breaking an encryption algorithm is finding the key to access the encrypted data in plain text. For symmetric algorithms, breaking the algorithm usually means trying to determine the key used to encrypt the text. For a public key algorithm, breaking the algorithm usually means acquiring the shared secret information between two recipients. One method of breaking a symmetric algorithm is to simply try every key within the full algorithm until the right key is found. 
For public key algorithms, since half of the key pair is publicly known, the other half (private key) can be derived using published, though complex, mathematical calculations. Manually finding the key to break an algorithm is called a brute force attack. Breaking an algorithm introduces the risk of intercepting, or even impersonating and fraudulently verifying, private information. The key strength of an algorithm is determined by finding the fastest method to break the algorithm and comparing it to a brute force attack. For symmetric keys, encryption strength is often described in terms of the size or length of the keys used to perform the encryption: longer keys generally provide stronger encryption. Key length is measured in bits. An encryption key is considered full strength if the best known attack to break the key is no faster than a brute force attempt to test every key possibility. Different types of algorithms - particularly public key algorithms - may require different key lengths to achieve the same level of encryption strength as a symmetric-key cipher. The RSA cipher can use only a subset of all possible values for a key of a given length, due to the nature of the mathematical problem on which it is based. Other ciphers, such as those used for symmetric-key encryption, can use all possible values for a key of a given length. More possible matching options means more security. Because it is relatively trivial to break an RSA key, an RSA public-key encryption cipher must have a very long key - at least 2048 bits - to be considered cryptographically strong. On the other hand, symmetric-key ciphers are reckoned to be equivalently strong using a much shorter key length, as little as 80 bits for most algorithms. Similarly, public-key ciphers based on the elliptic curve cryptography (ECC), such as the Elliptic Curve Digital Signature Algorithm (ECDSA) ciphers, also require less bits than RSA ciphers. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Introduction_to_Public_Key_Cryptography |
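The encryption and digital-signature concepts described in this chapter can be tried out with the openssl command line, which the chapter itself does not mention but which is a convenient stand-in for the operations it describes. The key size matches the 2048-bit RSA recommendation above; the file names are illustrative.

```shell
# Generate a 2048-bit RSA key pair: the private half stays secret,
# the public half can be distributed freely.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

# Public-key encryption: anyone with public.pem can encrypt,
# only the holder of private.pem can decrypt.
openssl pkeyutl -encrypt -pubin -inkey public.pem -in message.txt -out message.enc
openssl pkeyutl -decrypt -inkey private.pem -in message.enc -out message.dec

# Digital signature: sign with the private key, verify with the public key.
openssl dgst -sha256 -sign private.pem -out message.sig message.txt
openssl dgst -sha256 -verify public.pem -signature message.sig message.txt
```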
Chapter 2. Service Mesh 2.x | Chapter 2. Service Mesh 2.x 2.1. About OpenShift Service Mesh Note Because Red Hat OpenShift Service Mesh releases on a different cadence from OpenShift Container Platform and because the Red Hat OpenShift Service Mesh Operator supports deploying multiple versions of the ServiceMeshControlPlane , the Service Mesh documentation does not maintain separate documentation sets for minor versions of the product. The current documentation set applies to the most recent version of Service Mesh unless version-specific limitations are called out in a particular topic or for a particular feature. For additional information about the Red Hat OpenShift Service Mesh life cycle and supported platforms, refer to the Platform Life Cycle Policy . 2.1.1. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. Note Red Hat OpenShift Service Mesh 3 is generally available. For more information, see Red Hat OpenShift Service Mesh 3.0 . 2.1.2. Core features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 2.2. Service Mesh Release Notes 2.2.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 2.2.2. 
Red Hat OpenShift Service Mesh version 2.6.6 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.6, and includes the following ServiceMeshControlPlane resource version updates: 2.6.6, 2.5.9, and 2.4.15. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on OpenShift Container Platform 4.14 and later. The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. The version of Service Mesh automatically ensures a compatible version of Kiali. 2.2.2.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.7 Kiali Server 1.73.19 2.2.2.2. New features With this update, the Operator for Red Hat OpenShift Service Mesh 2.6 is renamed to Red Hat OpenShift Service Mesh 2 to align with the release of Red Hat OpenShift Service Mesh 3.0 and improve clarity. 2.2.3. Red Hat OpenShift Service Mesh version 2.5.9 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.6 and is supported on OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). 2.2.3.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali Server 1.73.19 2.2.4. Red Hat OpenShift Service Mesh version 2.4.15 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.6 and is supported on OpenShift Container Platform 4.14 and later. 2.2.4.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali Server 1.65.20 2.2.5. Red Hat OpenShift Service Mesh version 2.6.5 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.5, and includes the following ServiceMeshControlPlane resource version updates: 2.6.5, 2.5.8, and 2.4.14. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on OpenShift Container Platform 4.14 and later. You can use the most current version of the Kiali Operator provided by Red Hat with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. The version of Service Mesh automatically ensures a compatible version of Kiali. 2.2.5.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.7 Kiali Server 1.73.18 2.2.5.2. New features Red Hat OpenShift distributed tracing platform (Tempo) Stack is now supported on IBM Z. 2.2.5.3. Fixed issues OSSM-8608 Previously, terminating a Container Network Interface (CNI) pod during the installation phase while copying binaries could leave Istio-CNI temporary files on the node file system. Repeated occurrences could eventually fill up the node disk space. Now, while terminating a CNI pod during the installation phase, existing temporary files are deleted before copying the CNI binary, ensuring that only one temporary file per Istio version exists on the node file system. 2.2.6. 
Red Hat OpenShift Service Mesh version 2.5.8 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.5 and is supported on OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). 2.2.6.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali Server 1.73.18 2.2.6.2. Fixed issues OSSM-8608 Previously, terminating a Container Network Interface (CNI) pod during the installation phase while copying binaries could leave Istio-CNI temporary files on the node file system. Repeated occurrences could eventually fill up the node disk space. Now, while terminating a CNI pod during the installation phase, existing temporary files are deleted before copying the CNI binary, ensuring that only one temporary file per Istio version exists on the node file system. 2.2.7. Red Hat OpenShift Service Mesh version 2.4.14 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.5 and is supported on OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). 2.2.7.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali Server 1.65.19 2.2.7.2. Fixed issues OSSM-8608 Previously, terminating a Container Network Interface (CNI) pod during the installation phase while copying binaries could leave Istio-CNI temporary files on the node file system. Repeated occurrences could eventually fill up the node disk space. Now, while terminating a CNI pod during the installation phase, existing temporary files are deleted before copying the CNI binary, ensuring that only one temporary file per Istio version exists on the node file system. 2.2.8. Red Hat OpenShift Service Mesh version 2.6.4 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.4, and includes the following ServiceMeshControlPlane resource version updates: 2.6.4, 2.5.7, and 2.4.13. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on OpenShift Container Platform 4.14 and later. The most current version of the Kiali Operator provided by Red Hat can be used with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. The version of Service Mesh automatically ensures a compatible version of Kiali. 2.2.8.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.7 Kiali Server 1.73.17 2.2.9. Red Hat OpenShift Service Mesh version 2.5.7 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.4 and is supported on OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). 2.2.9.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali Server 1.73.17 2.2.10. Red Hat OpenShift Service Mesh version 2.4.13 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.4 and is supported on OpenShift Container Platform 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). 2.2.10.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali Server 1.65.18 2.2.11. 
Red Hat OpenShift Service Mesh version 2.6.3 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.3, and includes the following ServiceMeshControlPlane resource version updates: 2.6.3, 2.5.6, and 2.4.12. This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on OpenShift Container Platform 4.14 and later. The most current version of the Kiali Operator provided by Red Hat can be used with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. The version of Service Mesh automatically ensures a compatible version of Kiali. 2.2.11.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.7 Kiali Server 1.73.16 2.2.12. Red Hat OpenShift Service Mesh version 2.5.6 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.3, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. 2.2.12.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali Server 1.73.16 2.2.13. Red Hat OpenShift Service Mesh version 2.4.12 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.3, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. 2.2.13.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali Server 1.65.17 2.2.14. Red Hat OpenShift Service Mesh version 2.6.2 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.2, and includes the following ServiceMeshControlPlane resource version updates: 2.6.2, 2.5.5 and 2.4.11. This release addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.14 and later. The most current version of the Kiali Operator provided by Red Hat can be used with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. The version of Service Mesh automatically ensures a compatible version of Kiali. 2.2.14.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.7 Kiali Server 1.73.15 2.2.14.2. New features The cert-manager Operator for Red Hat OpenShift is now supported on IBM Power, IBM Z, and IBM(R) LinuxONE. 2.2.14.3. Fixed issues OSSM-8099 Previously, there was an issue supporting persistent session labels when the endpoints were in the draining phase. Now, there is a method of handling draining endpoints for the stateful header sessions. OSSM-8001 Previously, when runAsUser and runAsGroup were set to the same value in pods, the proxy GID was incorrectly set to match the container's GID, causing traffic interception issues with iptables rules applied by Istio CNI. Now, containers can have the same value for runAsUser and runAsGroup, and iptables rules apply correctly. OSSM-8074 Previously, the Kiali Operator failed to install the Kiali server when a Service Mesh had a numeric-only namespace (e.g., 12345 ). Now, namespaces with only numerals work correctly. 2.2.15. 
Red Hat OpenShift Service Mesh version 2.5.5 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.2, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. 2.2.15.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali Server 1.73.15 2.2.15.2. Fixed issues OSSM-8001 Previously, when the runAsUser and runAsGroup parameters were set to the same value in pods, the proxy GID was incorrectly set to match the container's GID, causing traffic interception issues with iptables rules applied by Istio CNI. Now, containers can have the same value for the runAsUser and runAsGroup parameters, and iptables rules apply correctly. OSSM-8074 Previously, the Kiali Operator provided by Red Hat failed to install the Kiali Server when a Service Mesh had a numeric-only namespace (e.g., 12345 ). Now, namespaces with only numerals work correctly. 2.2.16. Red Hat OpenShift Service Mesh version 2.4.11 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.2, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. 2.2.16.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali Server 1.65.16 2.2.16.2. Fixed issues OSSM-8001 Previously, when the runAsUser and runAsGroup parameters were set to the same value in pods, the proxy GID was incorrectly set to match the container's GID, causing traffic interception issues with iptables rules applied by Istio CNI. Now, containers can have the same value for the runAsUser and runAsGroup parameters, and iptables rules apply correctly. OSSM-8074 Previously, the Kiali Operator provided by Red Hat failed to install the Kiali Server when a Service Mesh had a numeric-only namespace (e.g., 12345 ). Now, namespaces with only numerals work correctly. 2.2.17. Red Hat OpenShift Service Mesh version 2.6.1 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.1, and includes the following ServiceMeshControlPlane resource version updates: 2.6.1, 2.5.4 and 2.4.10. This release addresses Common Vulnerabilities and Exposures (CVEs), contains a bug fix, and is supported on OpenShift Container Platform 4.14 and later. The most current version of the Kiali Operator provided by Red Hat can be used with all supported versions of Red Hat OpenShift Service Mesh. The version of Service Mesh is specified by using the ServiceMeshControlPlane resource. The version of Service Mesh automatically ensures a compatible version of Kiali. 2.2.17.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.5 Kiali Server 1.73.14 2.2.17.2. Fixed issues OSSM-6766 Previously, the OpenShift Service Mesh Console (OSSMC) plugin failed if the user wanted to update a namespace (for example, enabling or disabling injection), or create any Istio object (for example, creating traffic policies). Now, the OpenShift Service Mesh Console (OSSMC) plugin does not fail if the user updates a namespace or creates any Istio object. 2.2.18. Red Hat OpenShift Service Mesh version 2.5.4 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.1, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. 2.2.18.1. 
Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali Server 1.73.14 2.2.19. Red Hat OpenShift Service Mesh version 2.4.10 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.1, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. 2.2.19.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali Server 1.65.15 2.2.20. Red Hat OpenShift Service Mesh version 2.6.0 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.6.0, and includes the following ServiceMeshControlPlane resource version updates: 2.6.0, 2.5.3 and 2.4.9. This release adds new features, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.14 and later. This release ends maintenance support for Red Hat OpenShift Service Mesh version 2.3. If you are using Service Mesh version 2.3, you should update to a supported version. Important Red Hat OpenShift Service Mesh is designed for FIPS. Service Mesh uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . 2.2.20.1. Component updates Component Version Istio 1.20.8 Envoy Proxy 1.28.5 Kiali 1.73.9 2.2.20.2. Istio 1.20 support Service Mesh 2.6 is based on Istio 1.20, which provides new features and product enhancements, including: Native sidecars are supported on OpenShift Container Platform 4.16 or later. Example ServiceMeshControlPlane resource apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: ENABLE_NATIVE_SIDECARS: "true" Traffic mirroring in Istio 1.20 now supports multiple destinations. This feature enables the mirroring of traffic to various endpoints, allowing for simultaneous observation across different service versions or configurations. While Red Hat OpenShift Service Mesh supports many Istio 1.20 features, the following exceptions should be noted: Ambient mesh is not supported QuickAssist Technology (QAT) PrivateKeyProvider in Istio is not supported 2.2.20.3. Istio and Kiali bundle image name changes This release updates the Istio bundle image name and the Kiali bundle image name to better align with Red Hat naming conventions. Istio bundle image name: openshift-service-mesh/istio-operator-bundle Kiali bundle image name: openshift-service-mesh/kiali-operator-bundle 2.2.20.4. Integration with Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry This release introduces a generally available integration of the tracing extension provider(s) Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry. You can expose tracing data to the Red Hat OpenShift distributed tracing platform (Tempo) by appending a named element and the opentelemetry provider to the spec.meshConfig.extensionProviders specification in the ServiceMeshControlPlane resource. Then, a telemetry custom resource configures Istio proxies to collect trace spans and send them to the OpenTelemetry Collector endpoint (a configuration sketch appears at the end of these release notes). 
You can create a Red Hat build of OpenTelemetry instance in a mesh namespace and configure it to send tracing data to a tracing platform backend service. 2.2.20.5. Red Hat OpenShift distributed tracing platform (Jaeger) default setting change This release disables Red Hat OpenShift distributed tracing platform (Jaeger) by default for new instances of the ServiceMeshControlPlane resource. When updating existing instances of the ServiceMeshControlPlane resource to Red Hat OpenShift Service Mesh version 2.6, distributed tracing platform (Jaeger) remains enabled by default. Red Hat OpenShift Service Mesh 2.6 is the last release that includes support for Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator. Both distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator will be removed in the next release. If you are currently using distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator, you need to switch to Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry. 2.2.20.6. Gateway API use is generally available for Red Hat OpenShift Service Mesh cluster-wide deployments This release introduces General Availability for using the Kubernetes Gateway API version 1.0.0 with Red Hat OpenShift Service Mesh 2.6. This API use is limited to Red Hat OpenShift Service Mesh. The Gateway API custom resource definitions (CRDs) are not supported. Gateway API is now enabled by default if cluster-wide mode is enabled ( spec.mode: ClusterWide ). It can be enabled even if the custom resource definitions (CRDs) are not installed in the cluster. Important Gateway API for multitenant mesh deployments is still in Technology Preview. Refer to the following table to determine which Gateway API version should be installed with the OpenShift Service Mesh version you are using: Service Mesh Version Istio Version Gateway API Version Notes 2.6 1.20.x 1.0.0 N/A 2.5.x 1.18.x 0.6.2 Use the experimental branch because ReferenceGrant is missing in v0.6.2. 2.4.x 1.16.x 0.5.1 For multitenant mesh deployment, all Gateway API CRDs must be present. Use the experimental branch. You can disable this feature by setting PILOT_ENABLE_GATEWAY_API to false : apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: "false" 2.2.20.7. Fixed issues OSSM-6754 Previously, in OpenShift Container Platform 4.15, when users navigated to a Service details page, clicked the Service Mesh tab, and refreshed the page, the Service Mesh details page remained stuck on Service Mesh content information, even though the active tab was the default Details tab. Now, after a refresh, users can navigate through the different tabs of the Service details page without issue. OSSM-2101 Previously, the Istio Operator never deleted the istio-cni-node DaemonSet and other CNI resources when they were no longer needed. Now, after upgrading the Operator, if there is at least one SMCP installed in the cluster, the Operator reconciles this SMCP, and then deletes all unused CNI installations (even very old CNI versions as early as v2.0). 2.2.20.8. Kiali known issues OSSM-6099 Installing the OpenShift Service Mesh Console (OSSMC) plugin fails on an IPv6 cluster. Workaround: Install the OSSMC plugin on an IPv4 cluster. 2.2.21. 
Red Hat OpenShift Service Mesh version 2.5.3 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.0, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.12 and later. 2.2.21.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali 1.73.9 2.2.22. Red Hat OpenShift Service Mesh version 2.4.9 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.6.0, addresses Common Vulnerabilities and Exposures (CVEs), and is supported on OpenShift Container Platform 4.12 and later. 2.2.22.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali 1.65.11 2.2.23. Red Hat OpenShift Service Mesh version 2.5.2 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.5.2, and includes the following ServiceMeshControlPlane resource version updates: 2.5.2, 2.4.8 and 2.3.12. This release addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. 2.2.23.1. Component updates Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali 1.73.8 2.2.23.2. Fixed issues OSSM-6331 Previously, the smcp.general.logging.componentLevels spec accepted invalid LogLevel values, and the ServiceMeshControlPlane resource was still created. Now, the terminal shows an error message if an invalid value is used, and the control plane is not created. OSSM-6290 Previously, the Project filter drop-down of the Istio Config list page did not work correctly. All istio config items were displayed from all namespaces even if you selected a specific project from the drop-down menu. Now, only the istio config items that belong to the selected project in the filter drop-down are displayed. OSSM-6298 Previously, when you clicked an item reference within the OpenShift Service Mesh Console (OSSMC) plugin, the console sometimes performed multiple redirects before opening the desired page. As a result, navigating back to the page that was open in the console caused your web browser to open the wrong page. Now, these redirects do not occur, and clicking Back in a web browser opens the correct page. OSSM-6299 Previously, in OpenShift Container Platform 4.15, when you clicked the Node graph menu option of any node menu within the traffic graph, the node graph was not displayed. Instead, the page refreshed with the same traffic graph. Now, clicking the Node graph menu option correctly displays the node graph. OSSM-6267 Previously, configuring a data source in Red Hat OpenShift Service Mesh 2.5 Grafana caused a data query authentication error, and users could not view data in the Istio service and workload dashboards. Now, upgrading an existing 2.5 SMCP to version 2.5.2 or later resolves the Grafana error. 2.2.24. Red Hat OpenShift Service Mesh version 2.4.8 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.5.2, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified using the ServiceMeshControlPlane . 2.2.24.1. Component updates Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali 1.65.11 2.2.25. 
Red Hat OpenShift Service Mesh version 2.3.12 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.5.2, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. The most current version of the Red Hat OpenShift Service Mesh Operator can be used with all supported versions of Service Mesh. The version of Service Mesh is specified using the ServiceMeshControlPlane resource. 2.2.25.1. Component updates Component Version Istio 1.14.5 Envoy Proxy 1.22.11 Kiali 1.57.14 2.2.26. releases These releases added features and improvements. 2.2.26.1. New features Red Hat OpenShift Service Mesh version 2.5.1 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.5.1, and includes the following ServiceMeshControlPlane resource version updates: 2.5.1, 2.4.7 and 2.3.11. This release addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. 2.2.26.1.1. Component versions for Red Hat OpenShift Service Mesh version 2.5.1 Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali 1.73.7 2.2.26.2. New features Red Hat OpenShift Service Mesh version 2.5 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.5.0, and includes the following ServiceMeshControlPlane resource version updates: 2.5.0, 2.4.6 and 2.3.10. This release adds new features, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. This release ends maintenance support for OpenShift Service Mesh version 2.2. If you are using OpenShift Service Mesh version 2.2, you should update to a supported version. 2.2.26.2.1. Component versions for Red Hat OpenShift Service Mesh version 2.5 Component Version Istio 1.18.7 Envoy Proxy 1.26.8 Kiali 1.73.4 2.2.26.2.2. Istio 1.18 support Service Mesh 2.5 is based on Istio 1.18, which brings in new features and product enhancements. While Red Hat OpenShift Service Mesh supports many Istio 1.18 features, the following exceptions should be noted: Ambient mesh is not supported QuickAssist Technology (QAT) PrivateKeyProvider in Istio is not supported 2.2.26.2.3. Cluster-Wide mesh migration This release adds documentation for migrating from a multitenant mesh to a cluster-wide mesh. For more information, see the following documentation: "About migrating to a cluster-wide mesh" "Excluding namespaces from a cluster-wide mesh" "Defining which namespaces receive sidecar injection in a cluster-wide mesh" "Excluding individual pods from a cluster-wide mesh" 2.2.26.2.4. Red Hat OpenShift Service Mesh Operator on ARM-based clusters This release provides the Red Hat OpenShift Service Mesh Operator on ARM-based clusters as a generally available feature. 2.2.26.2.5. Integration with Red Hat OpenShift distributed tracing platform (Tempo) Stack This release introduces a generally available integration of the tracing extension provider(s). You can expose tracing data to the Red Hat OpenShift distributed tracing platform (Tempo) stack by appending a named element and the zipkin provider to the spec.meshConfig.extensionProviders specification. Then, a telemetry custom resource configures Istio proxies to collect trace spans and send them to the Tempo distributor service endpoint. 
Note Red Hat OpenShift distributed tracing platform (Tempo) Stack is not supported on IBM Z. 2.2.26.2.6. OpenShift Service Mesh Console plugin This release introduces a generally available version of the OpenShift Service Mesh Console (OSSMC) plugin. The OSSMC plugin is an extension to the OpenShift Console that provides visibility into your Service Mesh. With the OSSMC plugin installed, a new Service Mesh menu option is available on the navigation pane of the web console, as well as new Service Mesh tabs that enhance existing Workloads and Service console pages. The features of the OSSMC plugin are very similar to those of the standalone Kiali Console. The OSSMC plugin does not replace the Kiali Console, and after installing the OSSMC plugin, you can still access the standalone Kiali Console. 2.2.26.2.7. Istio OpenShift Routing (IOR) default setting change The default setting for Istio OpenShift Routing (IOR) has changed. Starting with this release, automatic routes are disabled by default for new instances of the ServiceMeshControlPlane resource. For new instances of the ServiceMeshControlPlane resource, you can use automatic routes by setting the enabled field to true in the gateways.openshiftRoute specification of the ServiceMeshControlPlane resource. Example ServiceMeshControlPlane resource apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true When updating existing instances of the ServiceMeshControlPlane resource to Red Hat OpenShift Service Mesh version 2.5, automatic routes remain enabled by default. 2.2.26.2.8. Istio proxy concurrency configuration enhancement The concurrency parameter in the networking.istio API configures how many worker threads the Istio proxy runs. For consistency across deployments, Istio now configures the concurrency parameter based upon the CPU limit allocated to the proxy container. For example, a limit of 2500m would set the concurrency parameter to 3 . If you set the concurrency parameter to a different value, then Istio uses that value to configure how many threads the proxy runs instead of using the CPU limit. Previously, the default setting for the parameter was 2 . 2.2.26.2.9. Gateway API CRD versions Important OpenShift Container Platform Gateway API support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . A new version of the Gateway API custom resource definition (CRD) is now available. Refer to the following table to determine which Gateway API version should be installed with the OpenShift Service Mesh version you are using: Service Mesh Version Istio Version Gateway API Version Notes 2.5.x 1.18.x 0.6.2 Use the experimental branch because ReferenceGrant is missing in v0.6.2 2.4.x 1.16.x 0.5.1 For multitenant mesh deployment, all Gateway API CRDs must be present. Use the experimental branch. 2.2.26.3. 
New features Red Hat OpenShift Service Mesh version 2.4.7 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.5.1, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. 2.2.26.3.1. Component versions for Red Hat OpenShift Service Mesh version 2.4.7 Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali 1.65.11 2.2.26.4. New features Red Hat OpenShift Service Mesh version 2.4.6 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.5.0, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. 2.2.26.4.1. Component versions for Red Hat OpenShift Service Mesh version 2.4.6 Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali 1.65.11 2.2.26.5. New features Red Hat OpenShift Service Mesh version 2.4.5 This release of Red Hat OpenShift Service Mesh updates the Red Hat OpenShift Service Mesh Operator version to 2.4.5, and includes the following ServiceMeshControlPlane resource version updates: 2.4.5, 2.3.9 and 2.2.12. This release addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later. 2.2.26.5.1. Component versions included in Red Hat OpenShift Service Mesh version 2.4.5 Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Kiali 1.65.11 2.2.26.6. New features Red Hat OpenShift Service Mesh version 2.4.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later versions. 2.2.26.6.1. Component versions included in Red Hat OpenShift Service Mesh version 2.4.4 Component Version Istio 1.16.7 Envoy Proxy 1.24.12 Jaeger 1.47.0 Kiali 1.65.10 2.2.26.7. New features Red Hat OpenShift Service Mesh version 2.4.3 The Red Hat OpenShift Service Mesh Operator is now available on ARM-based clusters as a Technology Preview feature. The envoyExtAuthzGrpc field has been added, which is used to configure an external authorization provider using the gRPC API. Common Vulnerabilities and Exposures (CVEs) have been addressed. This release is supported on OpenShift Container Platform 4.10 and newer versions. 2.2.26.7.1. Component versions included in Red Hat OpenShift Service Mesh version 2.4.3 Component Version Istio 1.16.7 Envoy Proxy 1.24.10 Jaeger 1.42.0 Kiali 1.65.8 2.2.26.7.2. Red Hat OpenShift Service Mesh operator to ARM-based clusters Important Red Hat OpenShift Service Mesh operator to ARM based clusters is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This release makes the Red Hat OpenShift Service Mesh Operator available on ARM-based clusters as a Technology Preview feature. Images are available for Istio, Envoy, Prometheus, Kiali, and Grafana. Images are not available for Jaeger, so Jaeger must be disabled as a Service Mesh add-on. 2.2.26.7.3. 
Remote Procedure Calls (gRPC) API support for external authorization configuration This enhancement adds the envoyExtAuthzGrpc field to configure an external authorization provider using the gRPC API. 2.2.26.8. New features Red Hat OpenShift Service Mesh version 2.4.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.8.1. Component versions included in Red Hat OpenShift Service Mesh version 2.4.2 Component Version Istio 1.16.7 Envoy Proxy 1.24.10 Jaeger 1.42.0 Kiali 1.65.7 2.2.26.9. New features Red Hat OpenShift Service Mesh version 2.4.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.9.1. Component versions included in Red Hat OpenShift Service Mesh version 2.4.1 Component Version Istio 1.16.5 Envoy Proxy 1.24.8 Jaeger 1.42.0 Kiali 1.65.7 2.2.26.10. New features Red Hat OpenShift Service Mesh version 2.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.10.1. Component versions included in Red Hat OpenShift Service Mesh version 2.4 Component Version Istio 1.16.5 Envoy Proxy 1.24.8 Jaeger 1.42.0 Kiali 1.65.6 2.2.26.10.2. Cluster-wide deployments This enhancement introduces a generally available version of cluster-wide deployments. A cluster-wide deployment contains a service mesh control plane that monitors resources for an entire cluster. The control plane uses a single query across all namespaces to monitor each Istio or Kubernetes resource that affects the mesh configuration. Reducing the number of queries the control plane performs in a cluster-wide deployment improves performance. 2.2.26.10.3. Support for discovery selectors This enhancement introduces a generally available version of the meshConfig.discoverySelectors field, which can be used in cluster-wide deployments to limit the services the service mesh control plane can discover. spec: meshConfig discoverySelectors: - matchLabels: env: prod region: us-east1 - matchExpressions: - key: app operator: In values: - cassandra - spark 2.2.26.10.4. Integration with cert-manager istio-csr With this update, Red Hat OpenShift Service Mesh integrates with the cert-manager controller and the istio-csr agent. cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing, and using those certificates. cert-manager provides and rotates an intermediate CA certificate for Istio. Integration with istio-csr enables users to delegate signing certificate requests from Istio proxies to cert-manager . ServiceMeshControlPlane v2.4 accepts CA certificates provided by cert-manager as cacerts secret. Note Integration with cert-manager and istio-csr is not supported on IBM Power(R), IBM Z(R), and IBM(R) LinuxONE. 2.2.26.10.5. Integration with external authorization systems This enhancement introduces a generally available method of integrating Red Hat OpenShift Service Mesh with external authorization systems by using the action: CUSTOM field of the AuthorizationPolicy resource. Use the envoyExtAuthzHttp field to delegate the access control to an external authorization system. 2.2.26.10.6. 
Integration with external Prometheus installation This enhancement introduces a generally available version of the Prometheus extension provider. You can expose metrics to the OpenShift Container Platform monitoring stack or a custom Prometheus installation by setting the value of the extensionProviders field to prometheus in the spec.meshConfig specification. The telemetry object configures Istio proxies to collect traffic metrics. Service Mesh only supports the Telemetry API for Prometheus metrics. spec: meshConfig: extensionProviders: - name: prometheus prometheus: {} --- apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics spec: metrics: - providers: - name: prometheus 2.2.26.10.7. Single stack IPv6 support This enhancement introduces generally available support for single stack IPv6 clusters, providing access to a broader range of IP addresses. Dual stack IPv4 or IPv6 cluster is not supported. Note Single stack IPv6 support is not available on IBM Power(R), IBM Z(R), and IBM(R) LinuxONE. 2.2.26.10.8. OpenShift Container Platform Gateway API support Important OpenShift Container Platform Gateway API support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This enhancement introduces an updated Technology Preview version of the OpenShift Container Platform Gateway API. By default, the OpenShift Container Platform Gateway API is disabled. 2.2.26.10.8.1. Enabling OpenShift Container Platform Gateway API To enable the OpenShift Container Platform Gateway API, set the value of the enabled field to true in the techPreview.gatewayAPI specification of the ServiceMeshControlPlane resource. spec: techPreview: gatewayAPI: enabled: true Previously, environment variables were used to enable the Gateway API. spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: "true" PILOT_ENABLE_GATEWAY_API_STATUS: "true" PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: "true" 2.2.26.10.9. Control plane deployment on infrastructure nodes Service Mesh control plane deployment is now supported and documented on OpenShift infrastructure nodes. For more information, see the following documentation: Configuring all Service Mesh control plane components to run on infrastructure nodes Configuring individual Service Mesh control plane components to run on infrastructure nodes 2.2.26.10.10. Istio 1.16 support Service Mesh 2.4 is based on Istio 1.16, which brings in new features and product enhancements. While many Istio 1.16 features are supported, the following exceptions should be noted: HBONE protocol for sidecars is an experimental feature that is not supported. Service Mesh on ARM64 architecture is not supported. OpenTelemetry API remains a Technology Preview feature. 2.2.26.11. 
New features Red Hat OpenShift Service Mesh version 2.3.11 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.5.1, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. 2.2.26.11.1. Component versions for Red Hat OpenShift Service Mesh version 2.3.11 Component Version Istio 1.14.5 Envoy Proxy 1.22.11 Kiali 1.57.14 2.2.26.12. New features Red Hat OpenShift Service Mesh version 2.3.10 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.5.0, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.12 and later. 2.2.26.12.1. Component versions for Red Hat OpenShift Service Mesh version 2.3.10 Component Version Istio 1.14.5 Envoy Proxy 1.22.11 Kiali 1.57.14 2.2.26.13. New features Red Hat OpenShift Service Mesh version 2.3.9 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.4.5, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later. 2.2.26.13.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.9 Component Version Istio 1.14.5 Envoy Proxy 1.22.11 Jaeger 1.47.0 Kiali 1.57.14 2.2.26.14. New features Red Hat OpenShift Service Mesh version 2.3.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later versions. 2.2.26.14.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.8 Component Version Istio 1.14.5 Envoy Proxy 1.22.11 Jaeger 1.47.0 Kiali 1.57.13 2.2.26.15. New features Red Hat OpenShift Service Mesh version 2.3.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.15.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.7 Component Version Istio 1.14.6 Envoy Proxy 1.22.11 Jaeger 1.42.0 Kiali 1.57.11 2.2.26.16. New features Red Hat OpenShift Service Mesh version 2.3.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.16.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.6 Component Version Istio 1.14.5 Envoy Proxy 1.22.11 Jaeger 1.42.0 Kiali 1.57.10 2.2.26.17. New features Red Hat OpenShift Service Mesh version 2.3.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.17.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.5 Component Version Istio 1.14.5 Envoy Proxy 1.22.9 Jaeger 1.42.0 Kiali 1.57.10 2.2.26.18. New features Red Hat OpenShift Service Mesh version 2.3.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.18.1. 
Component versions included in Red Hat OpenShift Service Mesh version 2.3.4 Component Version Istio 1.14.6 Envoy Proxy 1.22.9 Jaeger 1.42.0 Kiali 1.57.9 2.2.26.19. New features Red Hat OpenShift Service Mesh version 2.3.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.19.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.3 Component Version Istio 1.14.5 Envoy Proxy 1.22.9 Jaeger 1.42.0 Kiali 1.57.7 2.2.26.20. New features Red Hat OpenShift Service Mesh version 2.3.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.20.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.2 Component Version Istio 1.14.5 Envoy Proxy 1.22.7 Jaeger 1.39 Kiali 1.57.6 2.2.26.21. New features Red Hat OpenShift Service Mesh version 2.3.1 This release of Red Hat OpenShift Service Mesh introduces new features, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.21.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.1 Component Version Istio 1.14.5 Envoy Proxy 1.22.4 Jaeger 1.39 Kiali 1.57.5 2.2.26.22. New features Red Hat OpenShift Service Mesh version 2.3 This release of Red Hat OpenShift Service Mesh introduces new features, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.22.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3 Component Version Istio 1.14.3 Envoy Proxy 1.22.4 Jaeger 1.38 Kiali 1.57.3 2.2.26.22.2. New Container Network Interface (CNI) DaemonSet container and ConfigMap The openshift-operators namespace includes a new istio CNI DaemonSet istio-cni-node-v2-3 and a new ConfigMap resource, istio-cni-config-v2-3 . When upgrading to Service Mesh Control Plane 2.3, the existing istio-cni-node DaemonSet is not changed, and a new istio-cni-node-v2-3 DaemonSet is created. This name change does not affect releases or any istio-cni-node CNI DaemonSet associated with a Service Mesh Control Plane deployed using a release. 2.2.26.22.3. Gateway injection support This release introduces generally available support for Gateway injection. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than the sidecar Envoy proxies running alongside your service workloads. This enables the ability to customize gateway options. When using gateway injection, you must create the following resources in the namespace where you want to run your gateway proxy: Service , Deployment , Role , and RoleBinding . 2.2.26.22.4. Istio 1.14 Support Service Mesh 2.3 is based on Istio 1.14, which brings in new features and product enhancements. While many Istio 1.14 features are supported, the following exceptions should be noted: ProxyConfig API is supported with the exception of the image field. Telemetry API is a Technology Preview feature. SPIRE runtime is not a supported feature. 2.2.26.22.5. OpenShift Service Mesh Console Important OpenShift Service Mesh Console is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This release introduces a Technology Preview version of the OpenShift Container Platform Service Mesh Console, which integrates the Kiali interface directly into the OpenShift web console. For additional information, see Introducing the OpenShift Service Mesh Console (A Technology Preview) 2.2.26.22.6. Cluster-wide deployment Important Cluster-wide deployment is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This release introduces cluster-wide deployment as a Technology Preview feature. A cluster-wide deployment contains a Service Mesh Control Plane that monitors resources for an entire cluster. The control plane uses a single query across all namespaces to monitor each Istio or Kubernetes resource kind that affects the mesh configuration. In contrast, the multitenant approach uses a query per namespace for each resource kind. Reducing the number of queries the control plane performs in a cluster-wide deployment improves performance. Note This cluster-wide deployment documentation is only applicable for control planes deployed using SMCP v2.3. cluster-wide deployments created using SMCP v2.3 are not compatible with cluster-wide deployments created using SMCP v2.4. 2.2.26.22.6.1. Configuring cluster-wide deployment The following example ServiceMeshControlPlane object configures a cluster-wide deployment. To create an SMCP for cluster-wide deployment, a user must belong to the cluster-admin ClusterRole. If the SMCP is configured for cluster-wide deployment, it must be the only SMCP in the cluster. You cannot change the control plane mode from multitenant to cluster-wide (or from cluster-wide to multitenant). If a multitenant control plane already exists, delete it and create a new one. This example configures the SMCP for cluster-wide deployment. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1 1 Enables Istiod to monitor resources at the cluster level rather than monitor each individual namespace. Additionally, the SMMR must also be configured for cluster-wide deployment. This example configures the SMMR for cluster-wide deployment. apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1 1 Adds all namespaces to the mesh, including any namespaces you subsequently create. The following namespaces are not part of the mesh: kube, openshift, kube-* and openshift-*. 2.2.26.23. 
New features Red Hat OpenShift Service Mesh version 2.2.12 This release of Red Hat OpenShift Service Mesh is included with the Red Hat OpenShift Service Mesh Operator 2.4.5, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later. 2.2.26.23.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.12 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.47.0 Kiali 1.48.11 2.2.26.24. New features Red Hat OpenShift Service Mesh version 2.2.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later versions. 2.2.26.24.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.11 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.47.0 Kiali 1.48.10 2.2.26.25. New features Red Hat OpenShift Service Mesh version 2.2.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.25.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.10 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.42.0 Kiali 1.48.8 2.2.26.26. New features Red Hat OpenShift Service Mesh version 2.2.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.26.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.9 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.42.0 Kiali 1.48.7 2.2.26.27. New features Red Hat OpenShift Service Mesh version 2.2.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.27.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.8 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.42.0 Kiali 1.48.7 2.2.26.28. New features Red Hat OpenShift Service Mesh version 2.2.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 2.2.26.28.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.7 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.42.0 Kiali 1.48.6 2.2.26.29. New features Red Hat OpenShift Service Mesh version 2.2.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.29.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.6 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.39 Kiali 1.48.5 2.2.26.30. New features Red Hat OpenShift Service Mesh version 2.2.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.30.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.5 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.39 Kiali 1.48.3 2.2.26.31. 
New features Red Hat OpenShift Service Mesh version 2.2.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.31.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.4 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.36.14 Kiali 1.48.3 2.2.26.32. New features Red Hat OpenShift Service Mesh version 2.2.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.32.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.3 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.36 Kiali 1.48.3 2.2.26.33. New features Red Hat OpenShift Service Mesh version 2.2.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.33.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.2 Component Version Istio 1.12.7 Envoy Proxy 1.20.6 Jaeger 1.36 Kiali 1.48.2-1 2.2.26.33.2. Copy route labels With this enhancement, in addition to copying annotations, you can copy specific labels for an OpenShift route. Red Hat OpenShift Service Mesh copies all labels and annotations present in the Istio Gateway resource (with the exception of annotations starting with kubectl.kubernetes.io) into the managed OpenShift Route resource. 2.2.26.34. New features Red Hat OpenShift Service Mesh version 2.2.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.34.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.1 Component Version Istio 1.12.7 Envoy Proxy 1.20.6 Jaeger 1.34.1 Kiali 1.48.2-1 2.2.26.35. New features Red Hat OpenShift Service Mesh 2.2 This release of Red Hat OpenShift Service Mesh adds new features and enhancements, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.35.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2 Component Version Istio 1.12.7 Envoy Proxy 1.20.4 Jaeger 1.34.1 Kiali 1.48.0.16 2.2.26.35.2. WasmPlugin API This release adds support for the WasmPlugin API and deprecates the ServiceMeshExtension API. 2.2.26.35.3. ROSA support This release introduces service mesh support for Red Hat OpenShift on AWS (ROSA), including multi-cluster federation. 2.2.26.35.4. istio-node DaemonSet renamed This release, the istio-node DaemonSet is renamed to istio-cni-node to match the name in upstream Istio. 2.2.26.35.5. Envoy sidecar networking changes Istio 1.10 updated Envoy to send traffic to the application container using eth0 rather than lo by default. 2.2.26.35.6. Service Mesh Control Plane 1.1 This release marks the end of support for Service Mesh Control Planes based on Service Mesh 1.1 for all platforms. 2.2.26.35.7. Istio 1.12 Support Service Mesh 2.2 is based on Istio 1.12, which brings in new features and product enhancements. While many Istio 1.12 features are supported, the following unsupported features should be noted: AuthPolicy Dry Run is a tech preview feature. gRPC Proxyless Service Mesh is a tech preview feature. Telemetry API is a tech preview feature. 
Discovery selectors is not a supported feature. External control plane is not a supported feature. Gateway injection is not a supported feature. 2.2.26.35.8. Kubernetes Gateway API Important Kubernetes Gateway API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Kubernetes Gateway API is a technology preview feature that is disabled by default. If the Kubernetes API deployment controller is disabled, you must manually deploy and link an ingress gateway to the created Gateway object. If the Kubernetes API deployment controller is enabled, then an ingress gateway automatically deploys when a Gateway object is created. 2.2.26.35.8.1. Installing the Gateway API CRDs The Gateway API CRDs do not come preinstalled by default on OpenShift clusters. Install the CRDs prior to enabling Gateway API support in the SMCP. USD kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0" | kubectl apply -f -; } 2.2.26.35.8.2. Enabling Kubernetes Gateway API To enable the feature, set the following environment variables for the Istiod container in ServiceMeshControlPlane : spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: "true" PILOT_ENABLE_GATEWAY_API_STATUS: "true" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: "true" Restricting route attachment on Gateway API listeners is possible using the SameNamespace or All settings. Istio ignores usage of label selectors in listeners.allowedRoutes.namespaces and reverts to the default behavior ( SameNamespace ). 2.2.26.35.8.3. Manually linking an existing gateway to a Gateway resource If the Kubernetes API deployment controller is disabled, you must manually deploy and then link an ingress gateway to the created Gateway resource. apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname 2.2.26.36. New features Red Hat OpenShift Service Mesh 2.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.36.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.6 Component Version Istio 1.9.9 Envoy Proxy 1.17.5 Jaeger 1.36 Kiali 1.36.16 2.2.26.37. New features Red Hat OpenShift Service Mesh 2.1.5.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.37.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.5.2 Component Version Istio 1.9.9 Envoy Proxy 1.17.5 Jaeger 1.36 Kiali 1.24.17 2.2.26.38. 
New features Red Hat OpenShift Service Mesh 2.1.5.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.38.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.5.1 Component Version Istio 1.9.9 Envoy Proxy 1.17.5 Jaeger 1.36 Kiali 1.36.13 2.2.26.39. New features Red Hat OpenShift Service Mesh 2.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 and later versions. 2.2.26.39.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.5 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.36 Kiali 1.36.12-1 2.2.26.40. New features Red Hat OpenShift Service Mesh 2.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.40.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.4 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.2 Kiali 1.36.12-1 2.2.26.41. New features Red Hat OpenShift Service Mesh 2.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.41.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.3 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.2 Kiali 1.36.10-2 2.2.26.42. New features Red Hat OpenShift Service Mesh 2.1.2.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.42.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.2.1 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.2 Kiali 1.36.9 2.2.26.43. New features Red Hat OpenShift Service Mesh 2.1.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. With this release, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is now installed to the openshift-distributed-tracing namespace by default. Previously the default installation had been in the openshift-operator namespace. 2.2.26.43.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.2 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.1 Kiali 1.36.8 2.2.26.44. New features Red Hat OpenShift Service Mesh 2.1.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also adds the ability to disable the automatic creation of network policies. 2.2.26.44.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.1 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.24.1 Kiali 1.36.7 2.2.26.44.2. Disabling network policies Red Hat OpenShift Service Mesh automatically creates and manages a number of NetworkPolicies resources in the Service Mesh control plane and application namespaces. This is to ensure that applications and the control plane can communicate with each other. If you want to disable the automatic creation and management of NetworkPolicies resources, for example to enforce company security policies, you can do so. 
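Before you disable automatic management, it can help to list the NetworkPolicy objects that the Operator currently manages, because you become responsible for providing equivalent policies yourself. A minimal check, assuming the control plane runs in the istio-system namespace and bookinfo is one of your application namespaces, might look like the following:
$ oc get networkpolicy -n istio-system
$ oc get networkpolicy -n bookinfo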
You can edit the ServiceMeshControlPlane to set the spec.security.manageNetworkPolicy setting to false Note When you disable spec.security.manageNetworkPolicy Red Hat OpenShift Service Mesh will not create any NetworkPolicy objects. The system administrator is responsible for managing the network and fixing any issues this might cause. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the project where you installed the Service Mesh control plane, for example istio-system , from the Project menu. Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane , for example basic-install . On the Create ServiceMeshControlPlane Details page, click YAML to modify your configuration. Set the ServiceMeshControlPlane field spec.security.manageNetworkPolicy to false , as shown in this example. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false Click Save . 2.2.26.45. New features and enhancements Red Hat OpenShift Service Mesh 2.1 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.9.8, Envoy Proxy 1.17.1, Jaeger 1.24.1, and Kiali 1.36.5 on OpenShift Container Platform 4.6 EUS, 4.7, 4.8, 4.9, along with new features and enhancements. 2.2.26.45.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1 Component Version Istio 1.9.6 Envoy Proxy 1.17.1 Jaeger 1.24.1 Kiali 1.36.5 2.2.26.45.2. Service Mesh Federation New Custom Resource Definitions (CRDs) have been added to support federating service meshes. Service meshes may be federated both within the same cluster or across different OpenShift clusters. These new resources include: ServiceMeshPeer - Defines a federation with a separate service mesh, including gateway configuration, root trust certificate configuration, and status fields. In a pair of federated meshes, each mesh will define its own separate ServiceMeshPeer resource. ExportedServiceMeshSet - Defines which services for a given ServiceMeshPeer are available for the peer mesh to import. ImportedServiceSet - Defines which services for a given ServiceMeshPeer are imported from the peer mesh. These services must also be made available by the peer's ExportedServiceMeshSet resource. Service Mesh Federation is not supported between clusters on Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), or OpenShift Dedicated (OSD). 2.2.26.45.3. OVN-Kubernetes Container Network Interface (CNI) generally available The OVN-Kubernetes Container Network Interface (CNI) was previously introduced as a Technology Preview feature in Red Hat OpenShift Service Mesh 2.0.1 and is now generally available in Red Hat OpenShift Service Mesh 2.1 and 2.0.x for use on OpenShift Container Platform 4.7.32, OpenShift Container Platform 4.8.12, and OpenShift Container Platform 4.9. 2.2.26.45.4. Service Mesh WebAssembly (WASM) Extensions The ServiceMeshExtensions Custom Resource Definition (CRD), first introduced in 2.0 as Technology Preview, is now generally available. You can use CRD to build your own plugins, but Red Hat does not provide support for the plugins you create. Mixer has been completely removed in Service Mesh 2.1. Upgrading from a Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled. Mixer plugins will need to be ported to WebAssembly Extensions. 2.2.26.45.5. 
3scale WebAssembly Adapter (WASM) With Mixer now officially removed, OpenShift Service Mesh 2.1 does not support the 3scale mixer adapter. Before upgrading to Service Mesh 2.1, remove the Mixer-based 3scale adapter and any additional Mixer plugins. Then, manually install and configure the new 3scale WebAssembly adapter with Service Mesh 2.1+ using a ServiceMeshExtension resource. 3scale 2.11 introduces an updated Service Mesh integration based on WebAssembly . 2.2.26.45.6. Istio 1.9 Support Service Mesh 2.1 is based on Istio 1.9, which brings in a large number of new features and product enhancements. While the majority of Istio 1.9 features are supported, the following exceptions should be noted: Virtual Machine integration is not yet supported Kubernetes Gateway API is not yet supported Remote fetch and load of WebAssembly HTTP filters are not yet supported Custom CA Integration using the Kubernetes CSR API is not yet supported Request Classification for monitoring traffic is a tech preview feature Integration with external authorization systems via Authorization policy's CUSTOM action is a tech preview feature 2.2.26.45.7. Improved Service Mesh operator performance The amount of time Red Hat OpenShift Service Mesh uses to prune old resources at the end of every ServiceMeshControlPlane reconciliation has been reduced. This results in faster ServiceMeshControlPlane deployments, and allows changes applied to existing SMCPs to take effect more quickly. 2.2.26.45.8. Kiali updates Kiali 1.36 includes the following features and enhancements: Service Mesh troubleshooting functionality Control plane and gateway monitoring Proxy sync statuses Envoy configuration views Unified view showing Envoy proxy and application logs interleaved Namespace and cluster boxing to support federated service mesh views New validations, wizards, and distributed tracing enhancements 2.2.26.46. New features Red Hat OpenShift Service Mesh 2.0.11.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 2.2.26.46.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.11.1 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.36 Kiali 1.24.17 2.2.26.47. New features Red Hat OpenShift Service Mesh 2.0.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 2.2.26.47.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.11 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.36 Kiali 1.24.16-1 2.2.26.48. New features Red Hat OpenShift Service Mesh 2.0.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.48.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.10 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.28.0 Kiali 1.24.16-1 2.2.26.49. New features Red Hat OpenShift Service Mesh 2.0.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.49.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.9 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.24.1 Kiali 1.24.11 2.2.26.50. New features Red Hat OpenShift Service Mesh 2.0.8 This release of Red Hat OpenShift Service Mesh addresses bug fixes. 2.2.26.51. 
New features Red Hat OpenShift Service Mesh 2.0.7.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 2.2.26.51.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section in the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. For instance, an Istio authorization policy denies requests sent to the URI path /user/profile . In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. To opt-out from the new behavior in the mitigation, the fragment section in the URI will be kept. You can configure your ServiceMeshControlPlane to keep URI fragments. Warning Disabling the new behavior will normalize your paths as described above and is considered unsafe. Ensure that you have accommodated for this in any security policies before opting to keep URI fragments. Example ServiceMeshControlPlane modification apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: "false" 2.2.26.51.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo and httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: ["dev"] to: - operation: hosts: ["httpbin.com","httpbin.com:*"] Second example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: ["httpbin.example.com:*"] 2.2.26.52. New features Red Hat OpenShift Service Mesh 2.0.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.53. Red Hat OpenShift Service Mesh on Red Hat OpenShift Dedicated and Microsoft Azure Red Hat OpenShift Red Hat OpenShift Service Mesh is now supported through Red Hat OpenShift Dedicated and Microsoft Azure Red Hat OpenShift. 2.2.26.54. 
New features Red Hat OpenShift Service Mesh 2.0.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.55. New features Red Hat OpenShift Service Mesh 2.0.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.56. New features Red Hat OpenShift Service Mesh 2.0.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Important There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920. 2.2.26.56.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920 Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters ( %2F or %5C ) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin . A request sent to the URL path //admin will NOT be rejected by the authorization policy. According to RFC 3986 , the path //admin with multiple slashes should technically be treated as a different path from the /admin . However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields. Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 2.2.26.56.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests: Table 2.1. Normalization schemes Option Description Example Notes NONE No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920. BASE This is currently the option used in the default installation of Istio. This applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. /a/../b is normalized to /b . \da is normalized to /da . This setting is vulnerable to CVE-2021-31920. 
MERGE_SLASHES Slashes are merged after the BASE normalization. /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. DECODE_AND_MERGE_SLASHES The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policies routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ , before the MERGE_SLASHES normalization. /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production. The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way it chooses to. When using denial policies, ensure that you understand how your application behaves. 2.2.26.56.3. Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application. Table 2.2. Configuration examples If your application... Choose... Relies on the proxy to do normalization BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES Normalizes request paths based on RFC 3986 and does not merge slashes. BASE Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. MERGE_SLASHES Normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes. DECODE_AND_MERGE_SLASHES Processes request paths in a way that is incompatible with RFC 3986 . NONE 2.2.26.56.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v2 pathNormalization spec: techPreview: global: pathNormalization: <option> 2.2.26.56.5. Configuring for case normalization In some environments, it may be useful to have paths in authorization policies compared in a case insensitive manner. For example, treating https://myurl/get and https://myurl/GeT as equivalent. In those cases, you can use the EnvoyFilter shown below. This filter will change both the path used for comparison and the path presented to the application. In this example, istio-system is the name of the Service Mesh control plane project. 
Save the EnvoyFilter to a file and run the following command: USD oc create -f <myEnvoyFilterFile> apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: "envoy.filters.network.http_connection_manager" subFilter: name: "envoy.filters.http.router" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(":path") request_handle:headers():replace(":path", string.lower(path)) end 2.2.26.57. New features Red Hat OpenShift Service Mesh 2.0.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. In addition, this release has the following new features: Added an option to the must-gather data collection tool that gathers information from a specified Service Mesh control plane namespace. For more information, see OSSM-351 . Improved performance for Service Mesh control planes with hundreds of namespaces 2.2.26.58. New features Red Hat OpenShift Service Mesh 2.0.2 This release of Red Hat OpenShift Service Mesh adds support for IBM Z(R) and IBM Power(R) Systems. It also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.59. New features Red Hat OpenShift Service Mesh 2.0.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 2.2.26.60. New features Red Hat OpenShift Service Mesh 2.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.6.5, Jaeger 1.20.0, Kiali 1.24.2, and the 3scale Istio Adapter 2.0 and OpenShift Container Platform 4.6. In addition, this release has the following new features: Simplifies installation, upgrades, and management of the Service Mesh control plane. Reduces the Service Mesh control plane's resource usage and startup time. Improves performance by reducing inter-control plane communication over networking. Adds support for Envoy's Secret Discovery Service (SDS). SDS is a more secure and efficient mechanism for delivering secrets to Envoy side car proxies. Removes the need to use Kubernetes Secrets, which have well known security risks. Improves performance during certificate rotation, as proxies no longer require a restart to recognize new certificates. Adds support for Istio's Telemetry v2 architecture, which is built using WebAssembly extensions. This new architecture brings significant performance improvements. Updates the ServiceMeshControlPlane resource to v2 with a streamlined configuration to make it easier to manage the Service Mesh Control Plane. Introduces WebAssembly extensions as a Technology Preview feature. 2.2.27. Technology Preview Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.2.28. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Removed functionality no longer exists in the product. 2.2.28.1. Deprecated and removed features in Red Hat OpenShift Service Mesh 2.5 The v2.2 ServiceMeshControlPlane resource is no longer supported. Customers should update their mesh deployments to use a later version of the ServiceMeshControlPlane resource. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is deprecated. To collect trace spans, use the Red Hat OpenShift distributed tracing platform (Tempo) Stack. Support for the OpenShift Elasticsearch Operator is deprecated. Istio will remove support for first-party JSON Web Tokens (JWTs). Istio will still support third-Party JWTs. 2.2.28.2. Deprecated and removed features in Red Hat OpenShift Service Mesh 2.4 The v2.1 ServiceMeshControlPlane resource is no longer supported. Customers should upgrade their mesh deployments to use a later version of the ServiceMeshControlPlane resource. Support for Istio OpenShift Routing (IOR) is deprecated and will be removed in a future release. Support for Grafana is deprecated and will be removed in a future release. Support for the following cipher suites, which were deprecated in Red Hat OpenShift Service Mesh 2.3, has been removed from the default list of ciphers used in TLS negotiations on both the client and server sides. Applications that require access to services requiring one of these cipher suites will fail to connect when a TLS connection is initiated from the proxy. ECDHE-ECDSA-AES128-SHA ECDHE-RSA-AES128-SHA AES128-GCM-SHA256 AES128-SHA ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA AES256-GCM-SHA384 AES256-SHA 2.2.28.3. Deprecated and removed features in Red Hat OpenShift Service Mesh 2.3 Support for the following cipher suites has been deprecated. In a future release, they will be removed from the default list of ciphers used in TLS negotiations on both the client and server sides. ECDHE-ECDSA-AES128-SHA ECDHE-RSA-AES128-SHA AES128-GCM-SHA256 AES128-SHA ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA AES256-GCM-SHA384 AES256-SHA The ServiceMeshExtension API, which was deprecated in Red Hat OpenShift Service Mesh version 2.2, was removed in Red Hat OpenShift Service Mesh version 2.3. If you are using the ServiceMeshExtension API, you must migrate to the WasmPlugin API to continue using your WebAssembly extensions. 2.2.28.4. Deprecated features in Red Hat OpenShift Service Mesh 2.2 The ServiceMeshExtension API is deprecated as of release 2.2 and will be removed in a future release. While ServiceMeshExtension API is still supported in release 2.2, customers should start moving to the new WasmPlugin API. 2.2.28.5. Removed features in Red Hat OpenShift Service Mesh 2.2 This release marks the end of support for Service Mesh control planes based on Service Mesh 1.1 for all platforms. 2.2.28.6. Removed features in Red Hat OpenShift Service Mesh 2.1 In Service Mesh 2.1, the Mixer component is removed. Bug fixes and support is provided through the end of the Service Mesh 2.0 life cycle. 
Upgrading from a Service Mesh 2.0.x release to 2.1 will not proceed if Mixer plugins are enabled. Mixer plugins must be ported to WebAssembly Extensions. 2.2.28.7. Deprecated features in Red Hat OpenShift Service Mesh 2.0 The Mixer component was deprecated in release 2.0 and will be removed in release 2.1. While using Mixer for implementing extensions was still supported in release 2.0, extensions should have been migrated to the new WebAssembly mechanism. The following resource types are no longer supported in Red Hat OpenShift Service Mesh 2.0: Policy (authentication.istio.io/v1alpha1) is no longer supported. Depending on the specific configuration in your Policy resource, you may have to configure multiple resources to achieve the same effect. Use RequestAuthentication (security.istio.io/v1beta1) Use PeerAuthentication (security.istio.io/v1beta1) ServiceMeshPolicy (maistra.io/v1) is no longer supported. Use RequestAuthentication or PeerAuthentication , as mentioned above, but place in the Service Mesh control plane namespace. RbacConfig (rbac.istio.io/v1alpha1) is no longer supported. Replaced by AuthorizationPolicy (security.istio.io/v1beta1), which encompasses behavior of RbacConfig , ServiceRole , and ServiceRoleBinding . ServiceMeshRbacConfig (maistra.io/v1) is no longer supported. Use AuthorizationPolicy as above, but place in Service Mesh control plane namespace. ServiceRole (rbac.istio.io/v1alpha1) is no longer supported. ServiceRoleBinding (rbac.istio.io/v1alpha1) is no longer supported. In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. 2.2.29. Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not yet fully support IPv6 . As a result, Red Hat OpenShift Service Mesh does not support dual-stack clusters. Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services such as distributed tracing platform (Jaeger) and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. The Bookinfo sample application cannot be installed on IBM Power(R), IBM Z(R), and IBM(R) LinuxONE. WebAssembly extensions are not supported on IBM Power(R), IBM Z(R), and IBM(R) LinuxONE. LuaJIT is not supported on IBM Power(R), IBM Z(R), and IBM(R) LinuxONE. Single stack IPv6 support is not available on IBM Power(R), IBM Z(R), and IBM(R) LinuxONE. 2.2.29.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: * OSSM-5556 Gateways are skipped when istio-system labels do not match discovery selectors. + Workaround: Label the control plane namespace to match discovery selectors to avoid skipping the Gateway configurations. 
Example ServiceMeshControlPlane resource apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled gateways: ingress: enabled: true Then, run the following command at the command line: oc label namespace istio-system istio-discovery=enabled OSSM-3890 Attempting to use the Gateway API in a multitenant mesh deployment generates an error message similar to the following: 2023-05-02T15:20:42.541034Z error watch error in cluster Kubernetes: failed to list *v1alpha2.TLSRoute: the server could not find the requested resource (get tlsroutes.gateway.networking.k8s.io) 2023-05-02T15:20:42.616450Z info kube controller "gateway.networking.k8s.io/v1alpha2/TCPRoute" is syncing... To support Gateway API in a multitenant mesh deployment, all Gateway API Custom Resource Definition (CRD) files must be present in the cluster. In a multitenant mesh deployment, CRD scan is disabled, and Istio has no way to discover which CRDs are present in a cluster. As a result, Istio attempts to watch all supported Gateway API CRDs, but generates errors if some of those CRDs are not present. Service Mesh 2.3.1 and later versions support both v1alpha2 and v1beta1 CRDs. Therefore, both CRD versions must be present for a multitenant mesh deployment to support the Gateway API. Workaround: In the following example, the kubectl get operation installs the v1alpha2 and v1beta1 CRDs. Note that the URL contains the additional experimental segment, and update any of your existing scripts accordingly: $ kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.5.1" | kubectl apply -f -; } OSSM-2042 Deployment of SMCP named default fails. If you are creating an SMCP object and set its version field to v2.3, the name of the object cannot be default . If the name is default , then the control plane fails to deploy, and OpenShift generates a Warning event with the following message: Error processing component mesh-config: error: [mesh-config/templates/telemetryv2_1.6.yaml: Internal error occurred: failed calling webhook "rev.validation.istio.io": Post "https://istiod-default.istio-system.svc:443/validate?timeout=10s": x509: certificate is valid for istiod.istio-system.svc, istiod-remote.istio-system.svc, istio-pilot.istio-system.svc, not istiod-default.istio-system.svc, mesh-config/templates/enable-mesh-permissive.yaml OSSM-1655 Kiali dashboard shows error after enabling mTLS in SMCP . After enabling the spec.security.controlPlane.mtls setting in the SMCP, the Kiali console displays the following error message No subsets defined . OSSM-1505 This issue only occurs when using the ServiceMeshExtension resource on OpenShift Container Platform 4.11. When you use ServiceMeshExtension on OpenShift Container Platform 4.11, the resource never becomes ready. If you inspect the issue using oc describe ServiceMeshExtension , you will see the following error: stderr: Error creating mount namespace before pivot: function not implemented . Workaround: ServiceMeshExtension was deprecated in Service Mesh 2.2. Migrate from ServiceMeshExtension to the WasmPlugin resource. For more information, see Migrating from ServiceMeshExtension to WasmPlugin resources.
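As a rough illustration of the migration target, a WasmPlugin resource generally has the following shape; the plugin name, workload selector, OCI image reference, and plugin configuration shown here are placeholder values, not values taken from this documentation:
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: custom-filter
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: productpage
  url: oci://quay.io/example/custom-filter:latest
  phase: STATS
  pluginConfig:
    greeting: hello
The WasmPlugin API is the supported mechanism for deploying WebAssembly extensions in Service Mesh 2.2 and later, and the only available mechanism from Service Mesh 2.3 onward.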
OSSM-1396 If a gateway resource contains the spec.externalIPs setting, instead of being recreated when the ServiceMeshControlPlane is updated, the gateway is removed and never recreated. OSSM-1168 When service mesh resources are created as a single YAML file, the Envoy proxy sidecar is not reliably injected into pods. When the SMCP, SMMR, and Deployment resources are created individually, the deployment works as expected. OSSM-1115 The concurrency field of the spec.proxy API did not propagate to the istio-proxy. The concurrency field works when set with ProxyConfig . The concurrency field specifies the number of worker threads to run. If the field is set to 0 , then the number of worker threads available is equal to the number of CPU cores. If the field is not set, then the number of worker threads available defaults to 2 . In the following example, the concurrency field is set to 0 . apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0 OSSM-1052 When configuring a Service ExternalIP for the ingressgateway in the Service Mesh control plane, the service is not created. The schema for the SMCP is missing the parameter for the service. Workaround: Disable the gateway creation in the SMCP spec and manage the gateway deployment entirely manually (including Service, Role and RoleBinding). OSSM-882 This applies for Service Mesh 2.1 and earlier. Namespace is in the accessible_namespace list but does not appear in Kiali UI. By default, Kiali will not show any namespaces that start with "kube" because these namespaces are typically internal-use only and not part of a mesh. For example, if you create a namespace called 'akube-a' and add it to the Service Mesh member roll, then the Kiali UI does not display the namespace. For defined exclusion patterns, the software excludes namespaces that start with or contain the pattern. Workaround: Change the Kiali Custom Resource setting so it prefixes the setting with a carat (^). For example: api: namespaces: exclude: - "^istio-operator" - "^kube-.*" - "^openshift.*" - "^ibm.*" - "^kiali-operator" MAISTRA-2692 With Mixer removed, custom metrics that have been defined in Service Mesh 2.0.x cannot be used in 2.1. Custom metrics can be configured using EnvoyFilter . Red Hat is unable to support EnvoyFilter configuration except where explicitly documented. This is due to tight coupling with the underlying Envoy APIs, meaning that backward compatibility cannot be maintained. MAISTRA-2648 Service mesh extensions are currently not compatible with meshes deployed on IBM Z(R). MAISTRA-1959 Migration to 2.0 Prometheus scraping ( spec.addons.prometheus.scrape set to true ) does not work when mTLS is enabled. Additionally, Kiali displays extraneous graph data when mTLS is disabled. This problem can be addressed by excluding port 15020 from proxy configuration, for example, spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020 MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of before the pods are created, therefore the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 2.2.29.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali . 
These are the known issues in Kiali: OSSM-6299 In OpenShift Container Platform 4.15, when you click the Node graph menu option of any node menu within the traffic graph, the node graph is not displayed. Instead, the page is refreshed with the same traffic graph. Currently, no workaround exists for this issue. OSSM-6298 When you click an item reference within the OpenShift Service Mesh Console (OSSMC) plugin, such as a workload link related to a specific service, the console sometimes performs multiple redirections before opening the desired page. If you click Back in a web browser, a different page of the console opens instead of the page. As a workaround, click Back twice to navigate to the page. OSSM-6290 For OpenShift Container Platform 4.15, the Project filter of the Istio Config list page does not work correctly. All istio items are displayed even if you select a specific project from the dropdown. Currently, no workaround exists for this issue. KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser. 2.2.30. Fixed issues The following issues have been resolved in releases: 2.2.30.1. Service Mesh fixed issues OSSM-6177 Previously, when validation messages were enabled in the ServiceMeshControlPlane (SMCP), the istiod crashed continuously unless GatewayAPI support was enabled. Now, when validation messages are enabled but GatewayAPI support is not, the istiod does not continuously crash. OSSM-6163 Resolves the following issues: Previously, an unstable Prometheus image was included in the Service Mesh control plane (SMCP) v2.5, and users were not able to access the Prometheus dashboard. Now, in the Service Mesh operator 2.5.1, the Prometheus image has been updated. Previously, in the Service Mesh control plane (SMCP), a Grafana data source was not able to set Basic authentication password automatically and users were not able to view metrics from Prometheus in Grafana mesh dashboards. Now, a Grafana data source password is configured under the secureJsonData field. Metrics are displayed correctly in dashboards. OSSM-6148 Previously, the OpenShift Service Mesh Console (OSSMC) plugin did not respond when the user clicked any option in the menu of any node on the Traffic Graph page. Now, the plugin responds to the selected option in the menu by redirecting to the corresponding details page. OSSM-6099 Previously, the OpenShift Service Mesh Console (OSSMC) plugin failed to load correctly in an IPv6 cluster. Now, the OSSMC plugin configuration has been modified to ensure proper loading in an IPv6 cluster. OSSM-5960 Previously, the OpenShift Service Mesh Console (OSSMC) plugin did not display notification messages such as backend errors or Istio validations. Now, these notifications are displayed correctly at the top of the plugin page. OSSM-5959 Previously, the OpenShift Service Mesh Console (OSSMC) plugin did not display TLS and Istio certification information in the Overview page. Now, this information is displayed correctly. 
OSSM-5902 Previously, the OpenShift Service Mesh Console (OSSMC) plugin redirected to a "Not Found Page" error when the user clicked the Istio config health symbol on the Overview page. Now, the plugin redirects to the correct Istio config details page. OSSM-5541 Previously, an Istio operator pod might keep waiting for the leader lease in some restart conditions. Now, the leader election implementation has been enhanced to avoid this issue. OSSM-1397 Previously, if you removed the maistra.io/member-of label from a namespace, the Service Mesh Operator did not automatically reapply the label to the namespace. As a result, sidecar injection did not work in the namespace. The Operator would reapply the label to the namespace when you made changes to the ServiceMeshMember object, which triggered the reconciliation of this member object. Now, any change to the namespace also triggers the member object reconciliation. OSSM-3647 Previously, in the Service Mesh control plane (SMCP) v2.2 (Istio 1.12), WasmPlugins were applied only to inbound listeners. Since SMCP v2.3 (Istio 1.14), WasmPlugins have been applied to inbound and outbound listeners by default, which introduced regression for users of the 3scale WasmPlugin. Now, the environment variable APPLY_WASM_PLUGINS_TO_INBOUND_ONLY is added, which allows safe migration from SMCP v2.2 to v2.3 and v2.4. The following setting should be added to the SMCP config: spec: runtime: components: pilot: container: env: APPLY_WASM_PLUGINS_TO_INBOUND_ONLY: "true" To ensure safe migration, perform the following steps: Set APPLY_WASM_PLUGINS_TO_INBOUND_ONLY in SMCP v2.2. Upgrade to 2.4. Set spec.match[].mode: SERVER in WasmPlugins. Remove the previously-added environment variable. OSSM-4851 Previously, an error occurred in the operator deploying new pods in a namespace scoped inside the mesh when runAsGroup , runAsUser , or fsGroup parameters were nil . Now, a yaml validation has been added to avoid the nil value. OSSM-3771 Previously, OpenShift routes could not be disabled for additional ingress gateways defined in a Service Mesh Control Plane (SMCP). Now, a routeConfig block can be added to each additionalIngress gateway so the creation of OpenShift routes can be enabled or disabled for each gateway. OSSM-4197 Previously, if you deployed a v2.2 or v2.1 of the 'ServiceMeshControlPlane' resource, the /etc/cni/multus/net.d/ directory was not created. As a result, the istio-cni pod failed to become ready, and the istio-cni pods log contained the following message: USD error Installer exits with open /host/etc/cni/multus/net.d/v2-2-istio-cni.kubeconfig.tmp.841118073: no such file or directory Now, if you deploy a v2.2 or v2.1 of the 'ServiceMeshControlPlane' resource, the /etc/cni/multus/net.d/ directory is created, and the istio-cni pod becomes ready. OSSM-3993 Previously, Kiali only supported OpenShift OAuth via a proxy on the standard HTTPS port of 443 . Now, Kiali supports OpenShift OAuth over a non-standard HTTPS port. To enable the port, you must set the spec.server.web_port field to the proxy's non-standard HTTPS port in the Kiali CR. OSSM-3936 Previously, the values for the injection_label_rev and injection_label_name attributes were hardcoded. This prevented custom configurations from taking effect in the Kiali Custom Resource Definition (CRD). Now, the attribute values are not hardcoded. You can customize the values for the injection_label_rev and injection_label_name attributes in the spec.istio_labels specification. 
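As a rough sketch of that customization (not taken from this document; the label values shown are the usual Istio defaults and the namespace is an assumption), the attributes are set in the Kiali custom resource as follows:

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system          # assumed control plane namespace
spec:
  istio_labels:
    injection_label_name: "sidecar.istio.io/inject"   # assumed default injection label
    injection_label_rev: "istio.io/rev"               # assumed default revision label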
OSSM-3644 Previously, the federation egress-gateway received the wrong update of network gateway endpoints, causing extra endpoint entries. Now, the federation-egress gateway has been updated on the server side so it receives the correct network gateway endpoints. OSSM-3595 Previously, the istio-cni plugin sometimes failed on RHEL because SELinux did not allow the utility iptables-restore to open files in the /tmp directory. Now, SELinux passes iptables-restore via stdin input stream instead of via a file. OSSM-3586 Previously, Istio proxies were slow to start when Google Cloud Platform (GCP) metadata servers were not available. When you upgrade to Istio 1.14.6, Istio proxies start as expected on GCP, even if metadata servers are not available. OSSM-3025 Istiod sometimes fails to become ready. Sometimes, when a mesh contained many member namespaces, the Istiod pod did not become ready due to a deadlock within Istiod. The deadlock is now resolved and the pod now starts as expected. OSSM-2493 Default nodeSelector and tolerations in SMCP not passed to Kiali. The nodeSelector and tolerations you add to SMCP.spec.runtime.defaults are now passed to the Kiali resource. OSSM-2492 Default tolerations in SMCP not passed to Jaeger. The nodeSelector and tolerations you add to SMCP.spec.runtime.defaults are now passed to the Jaeger resource. OSSM-2374 If you deleted one of the ServiceMeshMember resources, then the Service Mesh operator deleted the ServiceMeshMemberRoll . While this is expected behavior when you delete the last ServiceMeshMember , the operator should not delete the ServiceMeshMemberRoll if it contains any members in addition to the one that was deleted. This issue is now fixed and the operator only deletes the ServiceMeshMemberRoll when the last ServiceMeshMember resource is deleted. OSSM-2373 Error trying to get OAuth metadata when logging in. To fetch the cluster version, the system:anonymous account is used. With the cluster's default bundled ClusterRoles and ClusterRoleBinding, the anonymous account can fetch the version correctly. If the system:anonymous account loses its privileges to fetch the cluster version, OpenShift authentication becomes unusable. This is fixed by using the Kiali SA to fetch the cluster version. This also allows for improved security on the cluster. OSSM-2371 Despite Kiali being configured as "view-only," a user can change the proxy logging level via the Workload details' Logs tab's kebab menu. This issue has been fixed so the options under "Set Proxy Log Level" are disabled when Kiali is configured as "view-only." OSSM-2344 Restarting Istiod causes Kiali to flood CRI-O with port-forward requests. This issue occurred when Kiali could not connect to Istiod and Kiali simultaneously issued a large number of requests to istiod. Kiali now limits the number of requests it sends to istiod. OSSM-2335 Dragging the mouse pointer over the Traces scatterchart plot sometimes caused the Kiali console to stop responding due to concurrent backend requests. OSSM-2221 Previously, gateway injection in the ServiceMeshControlPlane namespace was not possible because the ignore-namespace label was applied to the namespace by default. When creating a v2.4 control plane, the namespace no longer has the ignore-namespace label applied, and gateway injection is possible. 
In the following example, the oc label command removes the ignore-namespace label from a namespace in an existing deployment: USD oc label namespace istio-system maistra.io/ignore-namespace- where: istio_system Specified the name of the ServiceMeshControlPlane namespace. OSSM-2053 Using Red Hat OpenShift Service Mesh Operator 2.2 or 2.3, during SMCP reconciliation, the SMMR controller removed the member namespaces from SMMR.status.configuredMembers . This caused the services in the member namespaces to become unavailable for a few moments. Using Red Hat OpenShift Service Mesh Operator 2.2 or 2.3, the SMMR controller no longer removes the namespaces from SMMR.status.configuredMembers . Instead, the controller adds the namespaces to SMMR.status.pendingMembers to indicate that they are not up-to-date. During reconciliation, as each namespace synchronizes with the SMCP, the namespace is automatically removed from SMMR.status.pendingMembers . OSSM-1962 Use EndpointSlices in federation controller. The federation controller now uses EndpointSlices , which improves scalability and performance in large deployments. The PILOT_USE_ENDPOINT_SLICE flag is enabled by default. Disabling the flag prevents use of federation deployments. OSSM-1668 A new field spec.security.jwksResolverCA was added to the Version 2.1 SMCP but was missing in the 2.2.0 and 2.2.1 releases. When upgrading from an Operator version where this field was present to an Operator version that was missing this field, the .spec.security.jwksResolverCA field was not available in the SMCP . OSSM-1325 istiod pod crashes and displays the following error message: fatal error: concurrent map iteration and map write . OSSM-1211 Configuring Federated service meshes for failover does not work as expected. The Istiod pilot log displays the following error: envoy connection [C289] TLS error: 337047686:SSL routines:tls_process_server_certificate:certificate verify failed OSSM-1099 The Kiali console displayed the message Sorry, there was a problem. Try a refresh or navigate to a different page. OSSM-1074 Pod annotations defined in SMCP are not injected in the pods. OSSM-999 Kiali retention did not work as expected. Calendar times were greyed out in the dashboard graph. OSSM-797 Kiali Operator pod generates CreateContainerConfigError while installing or updating the operator. OSSM-722 Namespace starting with kube is hidden from Kiali. OSSM-569 There is no CPU memory limit for the Prometheus istio-proxy container. The Prometheus istio-proxy sidecar now uses the resource limits defined in spec.proxy.runtime.container . OSSM-535 Support validationMessages in SMCP. The ValidationMessages field in the Service Mesh Control Plane can now be set to True . This writes a log for the status of the resources, which can be helpful when troubleshooting problems. OSSM-449 VirtualService and Service causes an error "Only unique values for domains are permitted. Duplicate entry of domain." OSSM-419 Namespaces with similar names will all show in Kiali namespace list, even though namespaces may not be defined in Service Mesh Member Role. OSSM-296 When adding health configuration to the Kiali custom resource (CR) is it not being replicated to the Kiali configmap. OSSM-291 In the Kiali console, on the Applications, Services, and Workloads pages, the "Remove Label from Filters" function is not working. OSSM-289 In the Kiali console, on the Service Details pages for the 'istio-ingressgateway' and 'jaeger-query' services there are no Traces being displayed. 
The traces exist in Jaeger. OSSM-287 In the Kiali console there are no traces being displayed on the Graph Service. OSSM-285 When trying to access the Kiali console, receive the following error message "Error trying to get OAuth Metadata". Workaround: Restart the Kiali pod. MAISTRA-2735 The resources that the Service Mesh Operator deletes when reconciling the SMCP changed in Red Hat OpenShift Service Mesh version 2.1. Previously, the Operator deleted a resource with the following labels: maistra.io/owner app.kubernetes.io/version Now, the Operator ignores resources that does not also include the app.kubernetes.io/managed-by=maistra-istio-operator label. If you create your own resources, you should not add the app.kubernetes.io/managed-by=maistra-istio-operator label to them. MAISTRA-2687 Red Hat OpenShift Service Mesh 2.1 federation gateway does not send the full certificate chain when using external certificates. The Service Mesh federation egress gateway only sends the client certificate. Because the federation ingress gateway only knows about the root certificate, it cannot verify the client certificate unless you add the root certificate to the federation import ConfigMap . MAISTRA-2635 Replace deprecated Kubernetes API. To remain compatible with OpenShift Container Platform 4.8, the apiextensions.k8s.io/v1beta1 API was deprecated as of Red Hat OpenShift Service Mesh 2.0.8. MAISTRA-2631 The WASM feature is not working because podman is failing due to nsenter binary not being present. Red Hat OpenShift Service Mesh generates the following error message: Error: error configuring CNI network plugin exec: "nsenter": executable file not found in USDPATH . The container image now contains nsenter and WASM works as expected. MAISTRA-2534 When istiod attempted to fetch the JWKS for an issuer specified in a JWT rule, the issuer service responded with a 502. This prevented the proxy container from becoming ready and caused deployments to hang. The fix for the community bug has been included in the Service Mesh 2.0.7 release. MAISTRA-2411 When the Operator creates a new ingress gateway using spec.gateways.additionaIngress in the ServiceMeshControlPlane , Operator is not creating a NetworkPolicy for the additional ingress gateway like it does for the default istio-ingressgateway. This is causing a 503 response from the route of the new gateway. Workaround: Manually create the NetworkPolicy in the istio-system namespace. MAISTRA-2401 CVE-2021-3586 servicemesh-operator: NetworkPolicy resources incorrectly specified ports for ingress resources. The NetworkPolicy resources installed for Red Hat OpenShift Service Mesh did not properly specify which ports could be accessed. This allowed access to all ports on these resources from any pod. Network policies applied to the following resources are affected: Galley Grafana Istiod Jaeger Kiali Prometheus Sidecar injector MAISTRA-2378 When the cluster is configured to use OpenShift SDN with ovs-multitenant and the mesh contains a large number of namespaces (200+), the OpenShift Container Platform networking plugin is unable to configure the namespaces quickly. Service Mesh times out causing namespaces to be continuously dropped from the service mesh and then reenlisted. MAISTRA-2370 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the go routine. MAISTRA-2117 Add optional ConfigMap mount to operator. 
The CSV now contains an optional ConfigMap volume mount, which mounts the smcp-templates ConfigMap if it exists. If the smcp-templates ConfigMap does not exist, the mounted directory is empty. When you create the ConfigMap , the directory is populated with the entries from the ConfigMap and can be referenced in SMCP.spec.profiles . No restart of the Service Mesh operator is required. Customers using the 2.0 operator with a modified CSV to mount the smcp-templates ConfigMap can upgrade to Red Hat OpenShift Service Mesh 2.1. After upgrading, you can continue using an existing ConfigMap, and the profiles it contains, without editing the CSV. Customers that previously used ConfigMap with a different name will either have to rename the ConfigMap or update the CSV after upgrading. MAISTRA-2010 AuthorizationPolicy does not support request.regex.headers field. The validatingwebhook rejects any AuthorizationPolicy with the field, and even if you disable that, Pilot tries to validate it using the same code, and it does not work. MAISTRA-1979 Migration to 2.0 The conversion webhook drops the following important fields when converting SMCP.status from v2 to v1: conditions components observedGeneration annotations Upgrading the operator to 2.0 might break client tools that read the SMCP status using the maistra.io/v1 version of the resource. This also causes the READY and STATUS columns to be empty when you run oc get servicemeshcontrolplanes.v1.maistra.io . MAISTRA-1947 Technology Preview Updates to ServiceMeshExtensions are not applied. Workaround: Remove and recreate the ServiceMeshExtensions . MAISTRA-1983 Migration to 2.0 Upgrading to 2.0.0 with an existing invalid ServiceMeshControlPlane cannot easily be repaired. The invalid items in the ServiceMeshControlPlane resource caused an unrecoverable error. The fix makes the errors recoverable. You can delete the invalid resource and replace it with a new one or edit the resource to fix the errors. For more information about editing your resource, see [Configuring the Red Hat OpenShift Service Mesh installation]. MAISTRA-1502 As a result of CVEs fixes in version 1.0.10, the Istio dashboards are not available from the Home Dashboard menu in Grafana. To access the Istio dashboards, click the Dashboard menu in the navigation panel and select the Manage tab. MAISTRA-1399 Red Hat OpenShift Service Mesh no longer prevents you from installing unsupported CNI protocols. The supported network configurations has not changed. MAISTRA-1089 Migration to 2.0 Gateways created in a non-control plane namespace are automatically deleted. After removing the gateway definition from the SMCP spec, you need to manually delete these resources. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected: [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control pane, delete the evicted istio-operator pod. 
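A minimal sketch of that cleanup, assuming the Operator is installed in the openshift-operators namespace (adjust the namespace and pod name for your cluster):

oc get pods -n openshift-operators | grep istio-operator      # locate the evicted operator pod
oc delete pod -n openshift-operators <istio-operator-pod-name>   # delete it so a new pod is scheduled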
MAISTRA-681 When the Service Mesh control plane has many namespaces, it can lead to performance issues. MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel. Bugzilla 1821432 The toggle controls in OpenShift Container Platform Custom Resource details page does not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes updates the wrong field in the resource. To update a SMCP, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 2.3. Upgrading Service Mesh To access the most current features of Red Hat OpenShift Service Mesh, upgrade to the current version, 2.6.6. 2.3.1. Understanding versioning Red Hat uses semantic versioning for product releases. Semantic Versioning is a 3-component number in the format of X.Y.Z, where: X stands for a Major version. Major releases usually denote some sort of breaking change: architectural changes, API changes, schema changes, and similar major updates. Y stands for a Minor version. Minor releases contain new features and functionality while maintaining backwards compatibility. Z stands for a Patch version (also known as a z-stream release). Patch releases are used to addresses Common Vulnerabilities and Exposures (CVEs) and release bug fixes. New features and functionality are generally not released as part of a Patch release. 2.3.1.1. How versioning affects Service Mesh upgrades Depending on the version of the update you are making, the upgrade process is different. Patch updates - Patch upgrades are managed by the Operator Lifecycle Manager (OLM); they happen automatically when you update your Operators. Minor upgrades - Minor upgrades require both updating to the most recent Red Hat OpenShift Service Mesh Operator version and manually modifying the spec.version value in your ServiceMeshControlPlane resources. Major upgrades - Major upgrades require both updating to the most recent Red Hat OpenShift Service Mesh Operator version and manually modifying the spec.version value in your ServiceMeshControlPlane resources. Because major upgrades can contain changes that are not backwards compatible, additional manual changes might be required. 2.3.1.2. Understanding Service Mesh versions In order to understand what version of Red Hat OpenShift Service Mesh you have deployed on your system, you need to understand how each of the component versions is managed. Operator version - The most current Operator version is 2.6.6. The Operator version number only indicates the version of the currently installed Operator. Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, the version of the Operator does not determine the version of your deployed ServiceMeshControlPlane resources. Important Upgrading to the latest Operator version automatically applies patch updates, but does not automatically upgrade your Service Mesh control plane to the latest minor version. ServiceMeshControlPlane version - The ServiceMeshControlPlane version determines what version of Red Hat OpenShift Service Mesh you are using. The value of the spec.version field in the ServiceMeshControlPlane resource controls the architecture and configuration settings that are used to install and deploy Red Hat OpenShift Service Mesh. 
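As an aside, one way to compare the two version numbers on a live cluster is sketched below; the openshift-operators and istio-system namespaces are assumptions, so adjust them to match your installation.

oc get csv -n openshift-operators | grep servicemeshoperator        # installed Operator version
oc get smcp -n istio-system -o jsonpath='{.items[*].spec.version}'  # deployed control plane version(s)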
When you create the Service Mesh control plane, you can set the version in one of two ways: To configure in the Form View, select the version from the Control Plane Version menu. To configure in the YAML View, set the value for spec.version in the YAML file. Operator Lifecycle Manager (OLM) does not manage Service Mesh control plane upgrades, so the version number for your Operator and ServiceMeshControlPlane (SMCP) may not match, unless you have manually upgraded your SMCP. 2.3.2. Upgrade considerations The maistra.io/ label or annotation should not be used on a user-created custom resource, because it indicates that the resource was generated by and should be managed by the Red Hat OpenShift Service Mesh Operator. Warning During the upgrade, the Operator makes changes, including deleting or replacing files, to resources that include the following labels or annotations that indicate that the resource is managed by the Operator. Before upgrading, check for user-created custom resources that include the following labels or annotations: maistra.io/ AND the app.kubernetes.io/managed-by label set to maistra-istio-operator (Red Hat OpenShift Service Mesh) kiali.io/ (Kiali) jaegertracing.io/ (Red Hat OpenShift distributed tracing platform (Jaeger)) logging.openshift.io/ (Red Hat Elasticsearch) Before upgrading, check your user-created custom resources for labels or annotations that indicate they are Operator managed. Remove the label or annotation from custom resources that you do not want to be managed by the Operator. When upgrading to version 2.0, the Operator only deletes resources with these labels in the same namespace as the SMCP. When upgrading to version 2.1, the Operator deletes resources with these labels in all namespaces. 2.3.2.1. Known issues that may affect upgrade Known issues that may affect your upgrade include: When upgrading an Operator, custom configurations for Jaeger or Kiali might be reverted. Before upgrading an Operator, note any custom configuration settings for the Jaeger or Kiali objects in the Service Mesh production deployment so that you can recreate them. Red Hat OpenShift Service Mesh does not support the use of EnvoyFilter configuration except where explicitly documented. This is due to tight coupling with the underlying Envoy APIs, meaning that backward compatibility cannot be maintained. If you are using Envoy filters, and the configuration that Istio generates has changed because upgrading your ServiceMeshControlPlane introduces a newer version of Envoy, any EnvoyFilter you have implemented might break. OSSM-1505 ServiceMeshExtension does not work with OpenShift Container Platform version 4.11. Because ServiceMeshExtension has been deprecated in Red Hat OpenShift Service Mesh 2.2, this known issue will not be fixed and you must migrate your extensions to WasmPlugin. OSSM-1396 If a gateway resource contains the spec.externalIPs setting, rather than being recreated when the ServiceMeshControlPlane is updated, the gateway is removed and never recreated. OSSM-1052 When configuring a Service ExternalIP for the ingressgateway in the Service Mesh control plane, the service is not created. The schema for the SMCP is missing the parameter for the service. Workaround: Disable the gateway creation in the SMCP spec and manage the gateway deployment entirely manually (including Service, Role and RoleBinding). 2.3.3.
Upgrading the Operators In order to keep your Service Mesh patched with the latest security fixes, bug fixes, and software updates, you must keep your Operators updated. You initiate patch updates by upgrading your Operators. Important The version of the Operator does not determine the version of your service mesh. The version of your deployed Service Mesh control plane determines your version of Service Mesh. Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, updating the Red Hat OpenShift Service Mesh Operator does not update the spec.version value of your deployed ServiceMeshControlPlane . Also note that the spec.version value is a two digit number, for example 2.2, and that patch updates, for example 2.2.1, are not reflected in the SMCP version value. Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in OpenShift Container Platform. OLM queries for available Operators as well as upgrades for installed Operators. Whether or not you have to take action to upgrade your Operators depends on the settings you selected when installing them. When you installed each of your Operators, you selected an Update Channel and an Approval Strategy . The combination of these two settings determine when and how your Operators are updated. Table 2.3. Interaction of Update Channel and Approval Strategy Versioned channel "Stable" or "Preview" Channel Automatic Automatically updates the Operator for minor and patch releases for that version only. Will not automatically update to the major version (that is, from version 2.0 to 3.0). Manual change to Operator subscription required to update to the major version. Automatically updates Operator for all major, minor, and patch releases. Manual Manual updates required for minor and patch releases for the specified version. Manual change to Operator subscription required to update to the major version. Manual updates required for all major, minor, and patch releases. When you update your Red Hat OpenShift Service Mesh Operator the Operator Lifecycle Manager (OLM) removes the old Operator pod and starts a new pod. Once the new Operator pod starts, the reconciliation process checks the ServiceMeshControlPlane (SMCP), and if there are updated container images available for any of the Service Mesh control plane components, it replaces those Service Mesh control plane pods with ones that use the new container images. When you upgrade the Kiali and Red Hat OpenShift distributed tracing platform (Jaeger) Operators, the OLM reconciliation process scans the cluster and upgrades the managed instances to the version of the new Operator. For example, if you update the Red Hat OpenShift distributed tracing platform (Jaeger) Operator from version 1.30.2 to version 1.34.1, the Operator scans for running instances of distributed tracing platform (Jaeger) and upgrades them to 1.34.1 as well. To stay on a particular patch version of Red Hat OpenShift Service Mesh, you would need to disable automatic updates and remain on that specific version of the Operator. For more information about upgrading Operators, refer to the Operator Lifecycle Manager documentation. 2.3.4. Upgrading the control plane You must manually update the control plane for minor and major releases. The community Istio project recommends canary upgrades, Red Hat OpenShift Service Mesh only supports in-place upgrades. 
Red Hat OpenShift Service Mesh requires that you upgrade from each minor release to the next minor release in sequence. For example, you must upgrade from version 2.0 to version 2.1, and then upgrade to version 2.2. You cannot update from Red Hat OpenShift Service Mesh 2.0 to 2.2 directly. When you upgrade the service mesh control plane, all Operator managed resources, for example gateways, are also upgraded. Although you can deploy multiple versions of the control plane in the same cluster, Red Hat OpenShift Service Mesh does not support canary upgrades of the service mesh. That is, you can have different SMCP resources with different values for spec.version, but they cannot manage the same mesh. For more information about migrating your extensions, refer to Migrating from ServiceMeshExtension to WasmPlugin resources. 2.3.4.1. Upgrade changes from version 2.5 to version 2.6 2.3.4.1.1. Red Hat OpenShift distributed tracing platform (Jaeger) default setting change This release disables Red Hat OpenShift distributed tracing platform (Jaeger) by default for new instances of the ServiceMeshControlPlane resource. When updating existing instances of the ServiceMeshControlPlane resource to Red Hat OpenShift Service Mesh version 2.6, distributed tracing platform (Jaeger) remains enabled by default. Red Hat OpenShift Service Mesh 2.6 is the last release that includes support for Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator. Both distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator will be removed in the next release. If you are currently using distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator, you must migrate to Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry. 2.3.4.1.2. Envoy sidecar container default setting change To enhance pod startup times, Istio now includes a startupProbe in sidecar containers by default. The pod's readiness probes do not start until the Envoy sidecar has started. 2.3.4.2. Upgrade changes from version 2.4 to version 2.5 2.3.4.2.1. Istio OpenShift Routing (IOR) default setting change The default setting for Istio OpenShift Routing (IOR) has changed. The setting is now disabled by default. You can use IOR by setting the enabled field to true in the spec.gateways.openshiftRoute specification of the ServiceMeshControlPlane resource.

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
spec:
  gateways:
    openshiftRoute:
      enabled: true

2.3.4.2.2. Istio proxy concurrency configuration enhancement For consistency across deployments, Istio now configures the concurrency parameter based on the CPU limit allocated to the proxy container. For example, a limit of 2500m would set the concurrency parameter to 3. If you set the concurrency parameter to an explicit value, Istio uses that value to configure how many threads the proxy runs instead of using the CPU limit. Previously, the default setting for the parameter was 2. 2.3.4.3. Upgrade changes from version 2.3 to version 2.4 Upgrading the Service Mesh control plane from version 2.3 to 2.4 introduces the following behavioral changes: Support for Istio OpenShift Routing (IOR) has been deprecated. IOR functionality is still enabled, but it will be removed in a future release. The following cipher suites are no longer supported, and were removed from the list of ciphers used in client and server side TLS negotiations.
ECDHE-ECDSA-AES128-SHA ECDHE-RSA-AES128-SHA AES128-GCM-SHA256 AES128-SHA ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA AES256-GCM-SHA384 AES256-SHA Applications that require access to services that use one of these cipher suites will fail to connect when the proxy initiates a TLS connection. 2.3.4.4. Upgrade changes from version 2.2 to version 2.3 Upgrading the Service Mesh control plane from version 2.2 to 2.3 introduces the following behavioral changes: This release requires use of the WasmPlugin API. Support for the ServiceMeshExtension API, which was deprecated in 2.2, has now been removed. If you attempt to upgrade while using the ServiceMeshExtension API, then the upgrade fails. 2.3.4.5. Upgrade changes from version 2.1 to version 2.2 Upgrading the Service Mesh control plane from version 2.1 to 2.2 introduces the following behavioral changes: The istio-node DaemonSet is renamed to istio-cni-node to match the name in upstream Istio. Istio 1.10 updated Envoy to send traffic to the application container using eth0 rather than lo by default. This release adds support for the WasmPlugin API and deprecates the ServiceMeshExtension API. 2.3.4.6. Upgrade changes from version 2.0 to version 2.1 Upgrading the Service Mesh control plane from version 2.0 to 2.1 introduces the following architectural and behavioral changes. Architecture changes Mixer has been completely removed in Red Hat OpenShift Service Mesh 2.1. Upgrading from a Red Hat OpenShift Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled. If you see the following message when upgrading from v2.0 to v2.1, update the existing Mixer type to Istiod type in the existing Control Plane spec before you update the .spec.version field: An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type "Mixer" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type "Mixer" and telemetry.Mixer options have been removed in v2.1, please use another alternative]" For example: apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.6 Behavioral changes AuthorizationPolicy updates: With the PROXY protocol, if you're using ipBlocks and notIpBlocks to specify remote IP addresses, update the configuration to use remoteIpBlocks and notRemoteIpBlocks instead. Added support for nested JSON Web Token (JWT) claims. EnvoyFilter breaking changes> Must use typed_config xDS v2 is no longer supported Deprecated filter names Older versions of proxies may report 503 status codes when receiving 1xx or 204 status codes from newer proxies. 2.3.4.7. Upgrading the Service Mesh control plane To upgrade Red Hat OpenShift Service Mesh, you must update the version field of the Red Hat OpenShift Service Mesh ServiceMeshControlPlane v2 resource. Then, once it is configured and applied, restart the application pods to update each sidecar proxy and its configuration. Prerequisites You are running OpenShift Container Platform 4.9 or later. You have the latest Red Hat OpenShift Service Mesh Operator. Procedure Switch to the project that contains your ServiceMeshControlPlane resource. In this example, istio-system is the name of the Service Mesh control plane project. USD oc project istio-system Check your v2 ServiceMeshControlPlane resource configuration to verify it is valid. Run the following command to view your ServiceMeshControlPlane resource as a v2 resource. 
USD oc get smcp -o yaml Tip Back up your Service Mesh control plane configuration. Update the .spec.version field and apply the configuration. For example: apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 Alternatively, instead of using the command line, you can use the web console to edit the Service Mesh control plane. In the OpenShift Container Platform web console, click Project and select the project name you just entered. Click Operators Installed Operators . Find your ServiceMeshControlPlane instance. Select YAML view and update text of the YAML file, as shown in the example. Click Save . 2.3.4.8. Migrating Red Hat OpenShift Service Mesh from version 1.1 to version 2.0 Upgrading from version 1.1 to 2.0 requires manual steps that migrate your workloads and application to a new instance of Red Hat OpenShift Service Mesh running the new version. Prerequisites You must upgrade to OpenShift Container Platform 4.7. before you upgrade to Red Hat OpenShift Service Mesh 2.0. You must have Red Hat OpenShift Service Mesh version 2.0 operator. If you selected the automatic upgrade path, the operator automatically downloads the latest information. However, there are steps you must take to use the features in Red Hat OpenShift Service Mesh version 2.0. 2.3.4.8.1. Upgrading Red Hat OpenShift Service Mesh To upgrade Red Hat OpenShift Service Mesh, you must create an instance of Red Hat OpenShift Service Mesh ServiceMeshControlPlane v2 resource in a new namespace. Then, once it's configured, move your microservice applications and workloads from your old mesh to the new service mesh. Procedure Check your v1 ServiceMeshControlPlane resource configuration to make sure it is valid. Run the following command to view your ServiceMeshControlPlane resource as a v2 resource. USD oc get smcp -o yaml Check the spec.techPreview.errored.message field in the output for information about any invalid fields. If there are invalid fields in your v1 resource, the resource is not reconciled and cannot be edited as a v2 resource. All updates to v2 fields will be overridden by the original v1 settings. To fix the invalid fields, you can replace, patch, or edit the v1 version of the resource. You can also delete the resource without fixing it. After the resource has been fixed, it can be reconciled, and you can to modify or view the v2 version of the resource. To fix the resource by editing a file, use oc get to retrieve the resource, edit the text file locally, and replace the resource with the file you edited. USD oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. USD oc replace -f smcp-resource.yaml To fix the resource using patching, use oc patch . USD oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{"op": "replace","path":"/spec/path/to/bad/setting","value":"corrected-value"}]' To fix the resource by editing with command line tools, use oc edit . USD oc edit smcp.v1.maistra.io <smcp_name> Back up your Service Mesh control plane configuration. Switch to the project that contains your ServiceMeshControlPlane resource. In this example, istio-system is the name of the Service Mesh control plane project. USD oc project istio-system Enter the following command to retrieve the current configuration. Your <smcp_name> is specified in the metadata of your ServiceMeshControlPlane resource, for example basic-install or full-install . 
USD oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml Convert your ServiceMeshControlPlane to a v2 control plane version that contains information about your configuration as a starting point. USD oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml Create a project. In the OpenShift Container Platform console Project menu, click New Project and enter a name for your project, istio-system-upgrade , for example. Or, you can run this command from the CLI. USD oc new-project istio-system-upgrade Update the metadata.namespace field in your v2 ServiceMeshControlPlane with your new project name. In this example, use istio-system-upgrade . Update the version field from 1.1 to 2.0 or remove it in your v2 ServiceMeshControlPlane . Create a ServiceMeshControlPlane in the new namespace. On the command line, run the following command to deploy the control plane with the v2 version of the ServiceMeshControlPlane that you retrieved. In this example, replace `<smcp_name.v2> `with the path to your file. USD oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml Alternatively, you can use the console to create the Service Mesh control plane. In the OpenShift Container Platform web console, click Project . Then, select the project name you just entered. Click Operators Installed Operators . Click Create ServiceMeshControlPlane . Select YAML view and paste text of the YAML file you retrieved into the field. Check that the apiVersion field is set to maistra.io/v2 and modify the metadata.namespace field to use the new namespace, for example istio-system-upgrade . Click Create . 2.3.4.8.2. Configuring the 2.0 ServiceMeshControlPlane The ServiceMeshControlPlane resource has been changed for Red Hat OpenShift Service Mesh version 2.0. After you created a v2 version of the ServiceMeshControlPlane resource, modify it to take advantage of the new features and to fit your deployment. Consider the following changes to the specification and behavior of Red Hat OpenShift Service Mesh 2.0 as you're modifying your ServiceMeshControlPlane resource. You can also refer to the Red Hat OpenShift Service Mesh 2.0 product documentation for new information to features you use. The v2 resource must be used for Red Hat OpenShift Service Mesh 2.0 installations. 2.3.4.8.2.1. Architecture changes The architectural units used by versions have been replaced by Istiod. In 2.0 the Service Mesh control plane components Mixer, Pilot, Citadel, Galley, and the sidecar injector functionality have been combined into a single component, Istiod. Although Mixer is no longer supported as a control plane component, Mixer policy and telemetry plugins are now supported through WASM extensions in Istiod. Mixer can be enabled for policy and telemetry if you need to integrate legacy Mixer plugins. Secret Discovery Service (SDS) is used to distribute certificates and keys to sidecars directly from Istiod. In Red Hat OpenShift Service Mesh version 1.1, secrets were generated by Citadel, which were used by the proxies to retrieve their client certificates and keys. 2.3.4.8.2.2. Annotation changes The following annotations are no longer supported in v2.0. If you are using one of these annotations, you must update your workload before moving it to a v2.0 Service Mesh control plane. sidecar.maistra.io/proxyCPULimit has been replaced with sidecar.istio.io/proxyCPULimit . If you were using sidecar.maistra.io annotations on your workloads, you must modify those workloads to use sidecar.istio.io equivalents instead. 
sidecar.maistra.io/proxyMemoryLimit has been replaced with sidecar.istio.io/proxyMemoryLimit. sidecar.istio.io/discoveryAddress is no longer supported. Also, the default discovery address has moved from pilot.<control_plane_namespace>.svc:15010 (or port 15011, if mtls is enabled) to istiod-<smcp_name>.<control_plane_namespace>.svc:15012. The health status port is no longer configurable and is hard-coded to 15021. If you were defining a custom status port, for example, status.sidecar.istio.io/port, you must remove the override before moving the workload to a v2.0 Service Mesh control plane. Readiness checks can still be disabled by setting the status port to 0. Kubernetes Secret resources are no longer used to distribute client certificates for sidecars. Certificates are now distributed through Istiod's SDS service. If you were relying on mounted secrets, they are no longer available for workloads in v2.0 Service Mesh control planes. 2.3.4.8.2.3. Behavioral changes Some features in Red Hat OpenShift Service Mesh 2.0 work differently than they did in previous versions. The readiness port on gateways has moved from 15020 to 15021. The target host visibility includes VirtualService, as well as ServiceEntry resources. It includes any restrictions applied through Sidecar resources. Automatic mutual TLS is enabled by default. Proxy to proxy communication is automatically configured to use mTLS, regardless of global PeerAuthentication policies in place. Secure connections are always used when proxies communicate with the Service Mesh control plane regardless of the spec.security.controlPlane.mtls setting. The spec.security.controlPlane.mtls setting is only used when configuring connections for Mixer telemetry or policy. 2.3.4.8.2.4. Migration details for unsupported resources Policy (authentication.istio.io/v1alpha1) Policy resources must be migrated to new resource types for use with v2.0 Service Mesh control planes, PeerAuthentication and RequestAuthentication. Depending on the specific configuration in your Policy resource, you may have to configure multiple resources to achieve the same effect. Mutual TLS Mutual TLS enforcement is accomplished using the security.istio.io/v1beta1 PeerAuthentication resource. The legacy spec.peers.mtls.mode field maps directly to the new resource's spec.mtls.mode field. Selection criteria have changed from specifying a service name in spec.targets[x].name to a label selector in spec.selector.matchLabels. In PeerAuthentication, the labels must match the selector on the services named in the targets list. Any port-specific settings will need to be mapped into spec.portLevelMtls. Authentication Additional authentication methods specified in spec.origins must be mapped into a security.istio.io/v1beta1 RequestAuthentication resource. spec.selector.matchLabels must be configured similarly to the same field on PeerAuthentication. Configuration specific to JWT principals from spec.origins.jwt items maps to similar fields in spec.rules items. spec.origins[x].jwt.triggerRules specified in the Policy must be mapped into one or more security.istio.io/v1beta1 AuthorizationPolicy resources. Any spec.selector.labels must be configured similarly to the same field on RequestAuthentication. spec.origins[x].jwt.triggerRules.excludedPaths must be mapped into an AuthorizationPolicy whose spec.action is set to ALLOW, with spec.rules[x].to.operation.path entries matching the excluded paths.
spec.origins[x].jwt.triggerRules.includedPaths must be mapped into a separate AuthorizationPolicy whose spec.action is set to ALLOW , with spec.rules[x].to.operation.path entries matching the included paths, and spec.rules.[x].from.source.requestPrincipals entries that align with the specified spec.origins[x].jwt.issuer in the Policy resource. ServiceMeshPolicy (maistra.io/v1) ServiceMeshPolicy was configured automatically for the Service Mesh control plane through the spec.istio.global.mtls.enabled in the v1 resource or spec.security.dataPlane.mtls in the v2 resource setting. For v2 control planes, a functionally equivalent PeerAuthentication resource is created during installation. This feature is deprecated in Red Hat OpenShift Service Mesh version 2.0 RbacConfig, ServiceRole, ServiceRoleBinding (rbac.istio.io/v1alpha1) These resources were replaced by the security.istio.io/v1beta1 AuthorizationPolicy resource. Mimicking RbacConfig behavior requires writing a default AuthorizationPolicy whose settings depend on the spec.mode specified in the RbacConfig. When spec.mode is set to OFF , no resource is required as the default policy is ALLOW, unless an AuthorizationPolicy applies to the request. When spec.mode is set to ON, set spec: {} . You must create AuthorizationPolicy policies for all services in the mesh. spec.mode is set to ON_WITH_INCLUSION , must create an AuthorizationPolicy with spec: {} in each included namespace. Inclusion of individual services is not supported by AuthorizationPolicy. However, as soon as any AuthorizationPolicy is created that applies to the workloads for the service, all other requests not explicitly allowed will be denied. When spec.mode is set to ON_WITH_EXCLUSION , it is not supported by AuthorizationPolicy. A global DENY policy can be created, but an AuthorizationPolicy must be created for every workload in the mesh because there is no allow-all policy that can be applied to either a namespace or a workload. AuthorizationPolicy includes configuration for both the selector to which the configuration applies, which is similar to the function ServiceRoleBinding provides and the rules which should be applied, which is similar to the function ServiceRole provides. ServiceMeshRbacConfig (maistra.io/v1) This resource is replaced by using a security.istio.io/v1beta1 AuthorizationPolicy resource with an empty spec.selector in the Service Mesh control plane's namespace. This policy will be the default authorization policy applied to all workloads in the mesh. For specific migration details, see RbacConfig above. 2.3.4.8.2.5. Mixer plugins Mixer components are disabled by default in version 2.0. If you rely on Mixer plugins for your workload, you must configure your version 2.0 ServiceMeshControlPlane to include the Mixer components. To enable the Mixer policy components, add the following snippet to your ServiceMeshControlPlane . spec: policy: type: Mixer To enable the Mixer telemetry components, add the following snippet to your ServiceMeshControlPlane . spec: telemetry: type: Mixer Legacy mixer plugins can also be migrated to WASM and integrated using the new ServiceMeshExtension (maistra.io/v1alpha1) custom resource. Built-in WASM filters included in the upstream Istio distribution are not available in Red Hat OpenShift Service Mesh 2.0. 2.3.4.8.2.6. Mutual TLS changes When using mTLS with workload specific PeerAuthentication policies, a corresponding DestinationRule is required to allow traffic if the workload policy differs from the namespace/global policy. 
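To illustrate that pairing, the following is a sketch only (the bookinfo namespace and productpage host are borrowed from the sample application referenced later in this section): a DestinationRule that tells clients to use Istio mutual TLS when calling a workload whose PeerAuthentication policy is stricter than the mesh-wide setting.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage-istio-mutual
  namespace: bookinfo               # assumed application namespace
spec:
  host: productpage.bookinfo.svc.cluster.local   # assumed service host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL            # present Istio-issued client certificates to this host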
Auto mTLS is enabled by default, but can be disabled by setting spec.security.dataPlane.automtls to false in the ServiceMeshControlPlane resource. When disabling auto mTLS, DestinationRules may be required for proper communication between services. For example, setting PeerAuthentication to STRICT for one namespace may prevent services in other namespaces from accessing them, unless a DestinationRule configures TLS mode for the services in the namespace. For information about mTLS, see Enabling mutual Transport Layer Security (mTLS) 2.3.4.8.2.6.1. Other mTLS Examples To disable mTLS For productpage service in the bookinfo sample application, your Policy resource was configured the following way for Red Hat OpenShift Service Mesh v1.1. Example Policy resource apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage To disable mTLS For productpage service in the bookinfo sample application, use the following example to configure your PeerAuthentication resource for Red Hat OpenShift Service Mesh v2.0. Example PeerAuthentication resource apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the "productpage" service app: productpage To enable mTLS With JWT authentication for the productpage service in the bookinfo sample application, your Policy resource was configured the following way for Red Hat OpenShift Service Mesh v1.1. Example Policy resource apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: "https://securetoken.google.com" audiences: - "productpage" jwksUri: "https://www.googleapis.com/oauth2/v1/certs" jwtHeaders: - "x-goog-iap-jwt-assertion" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN To enable mTLS With JWT authentication for the productpage service in the bookinfo sample application, use the following example to configure your PeerAuthentication resource for Red Hat OpenShift Service Mesh v2.0. 
Example PeerAuthentication resource #require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the "productpage" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the "productpage" service app: productpage jwtRules: - issuer: "https://securetoken.google.com" audiences: - "productpage" jwksUri: "https://www.googleapis.com/oauth2/v1/certs" fromHeaders: - name: "x-goog-iap-jwt-assertion" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the "productpage" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. # principals: # - "*" requestPrincipals: - "*" - to: # no JWT token required to access health_check - operation: paths: - /health_check 2.3.4.8.3. Configuration recipes You can configure the following items with these configuration recipes. 2.3.4.8.3.1. Mutual TLS in a data plane Mutual TLS for data plane communication is configured through spec.security.dataPlane.mtls in the ServiceMeshControlPlane resource, which is false by default. 2.3.4.8.3.2. Custom signing key Istiod manages client certificates and private keys used by service proxies. By default, Istiod uses a self-signed certificate for signing, but you can configure a custom certificate and private key. For more information about how to configure signing keys, see Adding an external certificate authority key and certificate 2.3.4.8.3.3. Tracing Tracing is configured in spec.tracing . Currently, the only type of tracer that is supported is Jaeger . Sampling is a scaled integer representing 0.01% increments, for example, 1 is 0.01% and 10000 is 100%. The tracing implementation and sampling rate can be specified: spec: tracing: sampling: 100 # 1% type: Jaeger Jaeger is configured in the addons section of the ServiceMeshControlPlane resource. spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: "100G" storageClassName: "storageclass" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: "1Gi" cpu: "500m" limits: memory: "1Gi" The Jaeger installation can be customized with the install field. Container configuration, such as resource limits is configured in spec.runtime.components.jaeger related fields. 
If a Jaeger resource matching the value of spec.addons.jaeger.name exists, the Service Mesh control plane will be configured to use the existing installation. Use an existing Jaeger resource to fully customize your Jaeger installation. 2.3.4.8.3.4. Visualization Kiali and Grafana are configured under the addons section of the ServiceMeshControlPlane resource. spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install The Grafana and Kiali installations can be customized through their respective install fields. Container customization, such as resource limits, is configured in spec.runtime.components.kiali and spec.runtime.components.grafana . If an existing Kiali resource matching the value of name exists, the Service Mesh control plane configures the Kiali resource for use with the control plane. Some fields in the Kiali resource are overridden, such as the accessible_namespaces list, as well as the endpoints for Grafana, Prometheus, and tracing. Use an existing resource to fully customize your Kiali installation. 2.3.4.8.3.5. Resource utilization and scheduling Resources are configured under spec.runtime.<component> . The following component names are supported. Component Description Versions supported security Citadel container v1.0/1.1 galley Galley container v1.0/1.1 pilot Pilot/Istiod container v1.0/1.1/2.0 mixer istio-telemetry and istio-policy containers v1.0/1.1 mixer.policy istio-policy container v2.0 mixer.telemetry istio-telemetry container v2.0 global.oauthproxy oauth-proxy container used with various addons v1.0/1.1/2.0 sidecarInjectorWebhook sidecar injector webhook container v1.0/1.1 tracing.jaeger general Jaeger container - not all settings may be applied. Complete customization of Jaeger installation is supported by specifying an existing Jaeger resource in the Service Mesh control plane configuration. v1.0/1.1/2.0 tracing.jaeger.agent settings specific to Jaeger agent v1.0/1.1/2.0 tracing.jaeger.allInOne settings specific to Jaeger allInOne v1.0/1.1/2.0 tracing.jaeger.collector settings specific to Jaeger collector v1.0/1.1/2.0 tracing.jaeger.elasticsearch settings specific to Jaeger elasticsearch deployment v1.0/1.1/2.0 tracing.jaeger.query settings specific to Jaeger query v1.0/1.1/2.0 prometheus prometheus container v1.0/1.1/2.0 kiali Kiali container - complete customization of Kiali installation is supported by specifying an existing Kiali resource in the Service Mesh control plane configuration. v1.0/1.1/2.0 grafana Grafana container v1.0/1.1/2.0 3scale 3scale container v1.0/1.1/2.0 wasmExtensions.cacher WASM extensions cacher container v2.0 - tech preview Some components support resource limiting and scheduling. For more information, see Performance and scalability . 2.3.4.8.4. steps for migrating your applications and workloads Move the application workload to the new mesh and remove the old instances to complete your upgrade. 2.3.5. Upgrading the data plane Your data plane will still function after you have upgraded the control plane. But in order to apply updates to the Envoy proxy and any changes to the proxy configuration, you must restart your application pods and workloads. 2.3.5.1. Updating your applications and workloads To complete the migration, restart all of the application pods in the mesh to upgrade the Envoy sidecar proxies and their configuration. 
To perform a rolling update of a deployment, use the following command: USD oc rollout restart <deployment> You must perform a rolling update for all applications that make up the mesh. 2.4. Understanding Service Mesh Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 2.4.1. What is Red Hat OpenShift Service Mesh? A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features. Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 2.4.2. Service Mesh architecture Service mesh technology operates at the network communication level. That is, service mesh components capture or intercept traffic to and from microservices, either modifying requests, redirecting them, or creating new requests to other services. At a high level, Red Hat OpenShift Service Mesh consists of a data plane and a control plane. The data plane is a set of intelligent proxies, running alongside application containers in a pod, that intercept and control all inbound and outbound network communication between microservices in the service mesh. The data plane is implemented in such a way that it intercepts all inbound (ingress) and outbound (egress) network traffic. The Istio data plane consists of Envoy containers running alongside application containers in a pod. The Envoy container acts as a proxy, controlling all network communication into and out of the pod. Envoy proxies are the only Istio components that interact with data plane traffic. All incoming (ingress) and outgoing (egress) network traffic between services flows through the proxies. The Envoy proxy also collects all metrics related to service traffic within the mesh. Envoy proxies are deployed as sidecars, running in the same pod as services. Envoy proxies are also used to implement mesh gateways. Sidecar proxies manage inbound and outbound communication for their workload instance. Gateways are proxies operating as load balancers that receive incoming or outgoing HTTP/TCP connections. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. You use a Gateway to manage inbound and outbound traffic for your mesh, letting you specify which traffic you want to enter or leave the mesh.
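For illustration, the following minimal sketch shows a Gateway resource that exposes plain HTTP traffic through the default ingress gateway; the istio: ingressgateway selector and the bookinfo.example.com host are assumptions made for this example only.
Example Gateway resource (sketch)
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway  # hypothetical name for this sketch
spec:
  selector:
    istio: ingressgateway  # targets the default ingress gateway proxy
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "bookinfo.example.com"  # hypothetical host for this sketch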
Ingress-gateway - Also known as an Ingress Controller, the Ingress Gateway is a dedicated Envoy proxy that receives and controls traffic entering the service mesh. An Ingress Gateway allows features such as monitoring and route rules to be applied to traffic entering the cluster. Egress-gateway - Also known as an egress controller, the Egress Gateway is a dedicated Envoy proxy that manages traffic leaving the service mesh. An Egress Gateway allows features such as monitoring and route rules to be applied to traffic exiting the mesh. The control plane manages and configures the proxies that make up the data plane. It is the authoritative source for configuration, manages access control and usage policies, and collects metrics from the proxies in the service mesh. The Istio control plane is composed of Istiod which consolidates several control plane components (Citadel, Galley, Pilot) into a single binary. Istiod provides service discovery, configuration, and certificate management. It converts high-level routing rules to Envoy configurations and propagates them to the sidecars at runtime. Istiod can act as a Certificate Authority (CA), generating certificates supporting secure mTLS communication in the data plane. You can also use an external CA for this purpose. Istiod is responsible for injecting sidecar proxy containers into workloads deployed to an OpenShift cluster. Red Hat OpenShift Service Mesh uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster, in this case, a Red Hat OpenShift Service Mesh installation. Red Hat OpenShift Service Mesh also bundles the following Istio add-ons as part of the product: Kiali - Kiali is the management console for Red Hat OpenShift Service Mesh. It provides dashboards, observability, and robust configuration and validation capabilities. It shows the structure of your service mesh by inferring traffic topology and displays the health of your mesh. Kiali provides detailed metrics, powerful validation, access to Grafana, and strong integration with the distributed tracing platform (Jaeger). Prometheus - Red Hat OpenShift Service Mesh uses Prometheus to store telemetry information from services. Kiali depends on Prometheus to obtain metrics, health status, and mesh topology. Jaeger - Red Hat OpenShift Service Mesh supports the distributed tracing platform (Jaeger). Jaeger is an open source traceability server that centralizes and displays traces associated with a single request between multiple services. Using the distributed tracing platform (Jaeger) you can monitor and troubleshoot your microservices-based distributed systems. Elasticsearch - Elasticsearch is an open source, distributed, JSON-based search and analytics engine. The distributed tracing platform (Jaeger) uses Elasticsearch for persistent storage. Grafana - Grafana provides mesh administrators with advanced query and metrics analysis and dashboards for Istio data. Optionally, Grafana can be used to analyze service mesh metrics. The following Istio integrations are supported with Red Hat OpenShift Service Mesh: 3scale - Istio provides an optional integration with Red Hat 3scale API Management solutions. For versions prior to 2.1, this integration was achieved via the 3scale Istio adapter. 
For version 2.1 and later, the 3scale integration is achieved via a WebAssembly module. For information about how to install the 3scale adapter, refer to the 3scale Istio adapter documentation 2.4.3. Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 2.4.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 2.4.3.2. Kiali architecture Kiali is based on the open source Kiali project . Kiali is composed of two components: the Kiali application and the Kiali console. Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. 
Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform (Jaeger) as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that the user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that the user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 2.4.3.3. Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js, Quarkus, Spring Boot, Thorntail and Vert.x. You can also create your own custom dashboards. Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 2.4.4. Understanding distributed tracing Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. The distributed tracing platform (Jaeger) lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together, usually executed in different processes or hosts, to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service-oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. The distributed tracing platform (Jaeger) records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace comprises one or more spans. A span represents a logical unit of work that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. 2.4.4.1.
Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 2.4.4.2. Red Hat OpenShift distributed tracing platform architecture Red Hat OpenShift distributed tracing platform is made up of several components that work together to collect, store, and display tracing data. Red Hat OpenShift distributed tracing platform (Tempo) - This component is based on the open source Grafana Tempo project . Gateway - The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service. Distributor - The Distributor accepts spans in multiple formats including Jaeger, OpenTelemetry, and Zipkin. It routes spans to Ingesters by hashing the traceID and using a distributed consistent hash ring. Ingester - The Ingester batches a trace into blocks, creates bloom filters and indexes, and then flushes it all to the back end. Query Frontend - The Query Frontend is responsible for sharding the search space for an incoming query. The search query is then sent to the Queriers. The Query Frontend deployment exposes the Jaeger UI through the Tempo Query sidecar. Querier - The Querier is responsible for finding the requested trace ID in either the Ingesters or the back-end storage. Depending on parameters, it can query the Ingesters and pull Bloom indexes from the back end to search blocks in object storage. Compactor - The Compactors stream blocks to and from the back-end storage to reduce the total number of blocks. Red Hat build of OpenTelemetry - This component is based on the open source OpenTelemetry project . OpenTelemetry Collector - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. Red Hat OpenShift distributed tracing platform (Jaeger) - This component is based on the open source Jaeger project . Important The Red Hat OpenShift distributed tracing platform (Jaeger) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
Client (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The distributed tracing platform (Jaeger) clients are language-specific implementations of the OpenTracing API. They might be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Agent (Jaeger agent, Server Queue, Processor Workers) - The distributed tracing platform (Jaeger) agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. Jaeger Collector (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Red Hat OpenShift distributed tracing platform (Jaeger) has a pluggable mechanism for span storage. Red Hat OpenShift distributed tracing platform (Jaeger) supports the Elasticsearch storage. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Red Hat OpenShift distributed tracing platform can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. Jaeger Console - With the Red Hat OpenShift distributed tracing platform (Jaeger) user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 2.4.4.3. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.4.5. steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 2.5. Service mesh deployment models Red Hat OpenShift Service Mesh supports several different deployment models that can be combined in different ways to best suit your business requirements. In Istio, a tenant is a group of users that share common access and privileges for a set of deployed workloads. You can use tenants to provide a level of isolation between different teams. You can segregate access to different tenants using NetworkPolicies , AuthorizationPolicies , and exportTo annotations on istio.io or service resources. 2.5.1. 
Cluster-Wide (Single Tenant) mesh deployment model A cluster-wide deployment contains a Service Mesh Control Plane that monitors resources for an entire cluster. Monitoring resources for an entire cluster closely resembles Istio functionality in that the control plane uses a single query across all namespaces to monitor Istio and Kubernetes resources. As a result, cluster-wide deployments decrease the number of requests sent to the API server. Similar to Istio, a cluster-wide mesh includes namespaces with the istio-injection=enabled namespace label by default. You can change this label by modifying the spec.memberSelectors field of the ServiceMeshMemberRoll resource. 2.5.2. Multitenant deployment model Red Hat OpenShift Service Mesh installs a ServiceMeshControlPlane that is configured for multitenancy by default. Red Hat OpenShift Service Mesh uses a multitenant Operator to manage the Service Mesh control plane lifecycle. Within a mesh, namespaces are used for tenancy. Red Hat OpenShift Service Mesh uses ServiceMeshControlPlane resources to manage mesh installations, whose scope is limited by default to namespace that contains the resource. You use ServiceMeshMemberRoll and ServiceMeshMember resources to include additional namespaces into the mesh. A namespace can only be included in a single mesh, and multiple meshes can be installed in a single OpenShift cluster. Typical service mesh deployments use a single Service Mesh control plane to configure communication between services in the mesh. Red Hat OpenShift Service Mesh supports "soft multitenancy", where there is one control plane and one mesh per tenant, and there can be multiple independent control planes within the cluster. Multitenant deployments specify the projects that can access the Service Mesh and isolate the Service Mesh from other control plane instances. The cluster administrator gets control and visibility across all the Istio control planes, while the tenant administrator only gets control over their specific Service Mesh, Kiali, and Jaeger instances. You can grant a team permission to deploy its workloads only to a given namespace or set of namespaces. If granted the mesh-user role by the service mesh administrator, users can create a ServiceMeshMember resource to add namespaces to the ServiceMeshMemberRoll . 2.5.2.1. About migrating to a cluster-wide mesh In a cluster-wide mesh, one ServiceMeshControlPlane (SMCP) watches all of the namespaces for an entire cluster. You can migrate an existing cluster from a multitenant mesh to a cluster-wide mesh using Red Hat OpenShift Service Mesh version 2.5 or later. Note If a cluster must have more than one SMCP, then you cannot migrate to a cluster-wide mesh. By default, a cluster-wide mesh discovers all of the namespaces that comprise a cluster. However, you can configure the mesh to access a limited set of namespaces. Namespaces do not receive sidecar injection by default. You must specify which namespaces receive sidecar injection. Similarly, you must specify which pods receive sidecar injection. Pods that exist in a namespace that receives sidecar injection do not inherit sidecar injection. Applying sidecar injection to namespaces and to pods are separate operations. If you change the Istio version when migrating to a cluster-wide mesh, then you must restart the applications. If you use the same Istio version, the application proxies will connect to the new SMCP for the cluster-wide mesh, and work the same way they did for a multitenant mesh. 2.5.2.1.1. 
Including and excluding namespaces from a cluster-wide mesh by using the web console Using the OpenShift Container Platform web console, you can add discovery selectors to the ServiceMeshControlPlane resource in a cluster-wide mesh. Discovery selectors define the namespaces that the control plane can discover. The control plane ignores any namespace that does not match one of the discovery selectors, which excludes the namespace from the mesh. Note If you install ingress or egress gateways in the control plane namespace, you must include the control plane namespace in the discovery selectors. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You have deployed a ServiceMeshControlPlane resource. You are logged in as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you are logged in as a user with the dedicated-admin role. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click Istio Service Mesh Control Plane . Click the name of the control plane. Click YAML . Modify the YAML file so that the spec.meshConfig field of the ServiceMeshControlPlane resource includes the discovery selector. Note When configuring namespaces that the Istiod service can discover, exclude namespaces that might contain sensitive services that should not be exposed to the rest of the mesh. In the following example, the Istiod service discovers any namespace that is labeled istio-discovery: enabled or any namespace that has the name bookinfo , httpbin or istio-system : apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system 1 Ensures that the mesh discovers namespaces that contain the label istio-discovery: enabled . 2 Ensures that the mesh discovers namespaces bookinfo , httpbin and istio-system . If a namespace matches any of the discovery selectors, then the mesh discovers the namespace. The mesh excludes namespaces that do not match any of the discovery selectors. Save the file. 2.5.2.1.2. Including and excluding namespaces from a cluster-wide mesh by using the CLI Using the OpenShift Container Platform CLI, you can add discovery selectors to the ServiceMeshControlPlane resource in a cluster-wide mesh. Discovery selectors define the namespaces that the control plane can discover. The control plane ignores any namespace that does not match one of the discovery selectors, which excludes the namespace from the mesh. Note If you install ingress or egress gateways in the control plane namespace, you must include the control plane namespace in the discovery selectors. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You have deployed a ServiceMeshControlPlane resource. You are logged in as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you are logged in as a user with the dedicated-admin role. Procedure Log in to the OpenShift Container Platform CLI. Open the ServiceMeshControlPlane resource as a YAML file by running the following command: USD oc -n istio-system edit smcp <name> 1 1 <name> represents the name of the ServiceMeshControlPlane resource. 
Modify the YAML file so that the spec.meshConfig field of the ServiceMeshControlPlane resource includes the discovery selector. Note When configuring namespaces that the Istiod service can discover, exclude namespaces that might contain sensitive services that should not be exposed to the rest of the mesh. In the following example, the Istiod service discovers any namespace that is labeled istio-discovery: enabled or any namespace that has the name bookinfo , httpbin or istio-system : apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system 1 Ensures that the mesh discovers namespaces that contain the label istio-discovery: enabled . 2 Ensures that the mesh discovers namespaces bookinfo , httpbin and istio-system . If a namespace matches any of the discovery selectors, then the mesh discovers the namespace. The mesh excludes namespaces that do not match any of the discovery selectors. Save the file and exit the editor. 2.5.2.1.3. Defining which namespaces receive sidecar injection in a cluster-wide mesh by using the web console By default, the Red Hat OpenShift Service Mesh Operator uses member selectors to identify which namespaces receive sidecar injection. Namespaces that do not match the istio-injection=enabled label as defined in the ServiceMeshMemberRoll resource do not receive sidecar injection. Note Using discovery selectors to determine which namespaces the mesh can discover has no effect on sidecar injection. Discovering namespaces and configuring sidecar injection are separate operations. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You have deployed a ServiceMeshControlPlane resource with the mode: ClusterWide annotation. You are logged in as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you are logged in as a user with the dedicated-admin role. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click Istio Service Mesh Member Roll . Click the ServiceMeshMemberRoll resource. Click YAML . Modify the spec.memberSelectors field in the ServiceMeshMemberRoll resource by adding a member selector that matches the inject label. The following example uses istio-injection: enabled : apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1 1 Ensures that the namespace receives sidecar injection. Save the file. 2.5.2.1.4. Defining which namespaces receive sidecar injection in a cluster-wide mesh by using the CLI By default, the Red Hat OpenShift Service Mesh Operator uses member selectors to identify which namespaces receive sidecar injection. Namespaces that do not match the istio-injection=enabled label as defined in the ServiceMeshMemberRoll resource do not receive sidecar injection. Note Using discovery selectors to determine which namespaces the mesh can discover has no effect on sidecar injection. Discovering namespaces and configuring sidecar injection are separate operations. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You have deployed a ServiceMeshControlPlane resource with the mode: ClusterWide annotation.
You are logged in as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you are logged in as a user with the dedicated-admin role. Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. USD oc edit smmr -n <controlplane-namespace> Modify the spec.memberSelectors field in the ServiceMeshMemberRoll resource by adding a member selector that matches the inject label. The following example uses istio-injection: enabled : apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1 1 Ensures that the namespace receives sidecar injection. Save the file and exit the editor. 2.5.2.1.5. Excluding individual pods from a cluster-wide mesh by using the web console A pod receives sidecar injection if it has the sidecar.istio.io/inject: true annotation applied, and the pod exists in a namespace that matches either the label selector or the members list defined in the ServiceMeshMemberRoll resource. If a pod does not have the sidecar.istio.io/inject annotation applied, it cannot receive sidecar injection. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You have deployed a ServiceMeshControlPlane resource with the mode: ClusterWide annotation. You are logged in as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you are logged in as a user with the dedicated-admin role. Procedure Log in to the OpenShift Container Platform web console. Navigate to Workloads Deployments . Click the name of the deployment. Click YAML . Modify the YAML file to deploy one application that receives sidecar injection and one that does not, as shown in the following example: apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 1 This pod has the sidecar.istio.io/inject annotation applied, so it receives sidecar injection. 2 This pod does not have the annotation, so it does not receive sidecar injection. Save the file. 2.5.2.1.6. Excluding individual pods from a cluster-wide mesh by using the CLI A pod receives sidecar injection if it has the sidecar.istio.io/inject: true annotation applied, and the pod exists in a namespace that matches either the label selector or the members list defined in the ServiceMeshMemberRoll resource. If a pod does not have the sidecar.istio.io/inject annotation applied, it cannot receive sidecar injection. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You have deployed a ServiceMeshControlPlane resource with the mode: ClusterWide annotation. You are logged in as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you are logged in as a user with the dedicated-admin role. Procedure Log in to the OpenShift Container Platform CLI. 
Edit the deployment by running the following command: USD oc edit deployment -n <namespace> <deploymentName> Modify the YAML file to deploy one application that receives sidecar injection and one that does not, as shown in the following example: apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 1 This pod has the sidecar.istio.io/inject annotation applied, so it receives sidecar injection. 2 This pod does not have the annotation, so it does not receive sidecar injection. Save the file. 2.5.3. Multimesh or federated deployment model Federation is a deployment model that lets you share services and workloads between separate meshes managed in distinct administrative domains. The Istio multi-cluster model requires a high level of trust between meshes and remote access to all Kubernetes API servers on which the individual meshes reside. Red Hat OpenShift Service Mesh federation takes an opinionated approach to a multi-cluster implementation of Service Mesh that assumes minimal trust between meshes. A federated mesh is a group of meshes behaving as a single mesh. The services in each mesh can be unique services, for example a mesh adding services by importing them from another mesh, can provide additional workloads for the same services across the meshes, providing high availability, or a combination of both. All meshes that are joined into a federated mesh remain managed individually, and you must explicitly configure which services are exported to and imported from other meshes in the federation. Support functions such as certificate generation, metrics and trace collection remain local in their respective meshes. 2.6. Service Mesh and Istio differences Red Hat OpenShift Service Mesh differs from an installation of Istio to provide additional features or to handle differences when deploying on OpenShift Container Platform. 2.6.1. Differences between Istio and Red Hat OpenShift Service Mesh The following features are different in Service Mesh and Istio. 2.6.1.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc . Red Hat OpenShift Service Mesh does not support istioctl . 2.6.1.2. Installation and upgrades Red Hat OpenShift Service Mesh does not support Istio installation profiles. Red Hat OpenShift Service Mesh does not support canary upgrades of the service mesh. 2.6.1.3. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods, but you must opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift Container Platform capabilities such as builder pods. To enable automatic injection, specify the sidecar.istio.io/inject label, or annotation, as described in the Automatic sidecar injection section. Table 2.4. 
Sidecar injection label and annotation settings Upstream Istio Red Hat OpenShift Service Mesh Namespace Label supports "enabled" and "disabled" supports "disabled" Pod Label supports "true" and "false" supports "true" and "false" Pod Annotation supports "false" only supports "true" and "false" 2.6.1.4. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression. Upstream Istio community matching request headers example apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - "allowed.*" selector: matchLabels: app: httpbin 2.6.1.5. OpenSSL Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system. 2.6.1.6. External workloads Red Hat OpenShift Service Mesh does not support external workloads, such as virtual machines running outside OpenShift on bare metal servers. 2.6.1.7. Virtual Machine Support You can deploy virtual machines to OpenShift using OpenShift Virtualization. Then, you can apply a mesh policy, such as mTLS or AuthorizationPolicy, to these virtual machines, just like any other pod that is part of a mesh. 2.6.1.8. Component modifications A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, distributed tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole. 2.6.1.9. Envoy filters Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. Due to tight coupling with the underlying Envoy APIs, backward compatibility cannot be maintained. EnvoyFilter patches are very sensitive to the format of the Envoy configuration that is generated by Istio. If the configuration generated by Istio changes, it has the potential to break the application of the EnvoyFilter . 2.6.1.10. Envoy services Red Hat OpenShift Service Mesh does not support QUIC-based services. 2.6.1.11. Istio Container Network Interface (CNI) plugin Red Hat OpenShift Service Mesh includes CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration eliminating the need to grant service accounts and projects access to security context constraints (SCCs) with elevated privileges. Note By default, Istio Container Network Interface (CNI) pods are created on all OpenShift Container Platform nodes. 
To exclude the creation of CNI pods in a specific node, apply the maistra.io/exclude-cni=true label to the node. Adding this label removes any previously deployed Istio CNI pods from the node. 2.6.1.12. Global mTLS settings Red Hat OpenShift Service Mesh creates a PeerAuthentication resource that enables or disables Mutual TLS authentication (mTLS) within the mesh. 2.6.1.13. Gateways Red Hat OpenShift Service Mesh installs ingress and egress gateways by default. You can disable gateway installation in the ServiceMeshControlPlane (SMCP) resource by using the following settings: spec.gateways.enabled=false to disable both ingress and egress gateways. spec.gateways.ingress.enabled=false to disable ingress gateways. spec.gateways.egress.enabled=false to disable egress gateways. Note The Operator annotates the default gateways to indicate that they are generated by and managed by the Red Hat OpenShift Service Mesh Operator. 2.6.1.14. Multicluster configurations Red Hat OpenShift Service Mesh support for multicluster configurations is limited to the federation of service meshes across multiple clusters. 2.6.1.15. Custom Certificate Signing Requests (CSR) You cannot configure Red Hat OpenShift Service Mesh to process CSRs through the Kubernetes certificate authority (CA). 2.6.1.16. Routes for Istio Gateways OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation. 2.6.1.16.1. Catch-all domains Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will not be a catch all ("*") route, instead it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift Container Platform documentation for more information about how default hostnames work and how a cluster-admin can customize it. If you use Red Hat OpenShift Dedicated, refer to the Red Hat OpenShift Dedicated the dedicated-admin role. 2.6.1.16.2. Subdomains Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it. 2.6.1.16.3. Transport layer security Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation 2.6.2. Multitenant installations Whereas upstream Istio takes a single tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances. 2.6.2.1. 
Multitenancy versus cluster-wide installations The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by Istiod. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding . Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details. If the OpenShift Container Platform cluster is configured to use the SDN plugin: NetworkPolicy : Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. Multitenant : Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project ). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project ). Subnet : No additional configuration is performed. 2.6.2.2. Cluster scoped resources Upstream Istio has two cluster-scoped resources that it relies on: the MeshPolicy and the ClusterRbacConfig . These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane. 2.6.3. Kiali and service mesh Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges. 2.6.4.
Distributed tracing and service mesh Installing the distributed tracing platform (Jaeger) with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name for the Zipkin port name has changed to jaeger-collector-zipkin (from http ) Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar, for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector. 2.7. Preparing to install Service Mesh Before you can install Red Hat OpenShift Service Mesh, you must subscribe to OpenShift Container Platform and install OpenShift Container Platform in a supported configuration. 2.7.1. Prerequisites Maintain an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.15 overview . Install OpenShift Container Platform 4.15. If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install OpenShift Container Platform 4.15 on AWS Install OpenShift Container Platform 4.15 on user-provisioned AWS Install OpenShift Container Platform 4.15 on bare metal Install OpenShift Container Platform 4.15 on vSphere Install OpenShift Container Platform 4.15 on IBM Z(R) and IBM(R) LinuxONE Install OpenShift Container Platform 4.15 on IBM Power(R) Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.15, see About the OpenShift CLI . For additional information about Red Hat OpenShift Service Mesh lifecycle and supported platforms, refer to the Support Policy . 2.7.2. Supported configurations The following configurations are supported for the current release of Red Hat OpenShift Service Mesh. 2.7.2.1. Supported platforms The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Version 2.6 Service Mesh control planes are supported on the following platform versions: Red Hat OpenShift Container Platform version 4.10 or later Red Hat OpenShift Dedicated version 4 Azure Red Hat OpenShift (ARO) version 4 Red Hat OpenShift Service on AWS (ROSA) 2.7.2.2. Unsupported configurations Explicitly unsupported cases include: OpenShift Online is not supported for Red Hat OpenShift Service Mesh. 
Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running. 2.7.2.3. Supported network configurations Red Hat OpenShift Service Mesh supports the following network configurations. OpenShift-SDN. OVN-Kubernetes is available on all supported versions of OpenShift Container Platform. Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information. 2.7.2.4. Supported configurations for Service Mesh This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z(R), and IBM Power(R). IBM Z(R) is only supported on OpenShift Container Platform 4.10 and later. IBM Power(R) is only supported on OpenShift Container Platform 4.10 and later. Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster. Configurations that do not integrate external services such as virtual machines. Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. 2.7.2.5. Supported configurations for Kiali The Kiali console is only supported on the two most recent releases of the Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers. The openshift authentication strategy is the only supported authentication configuration when Kiali is deployed with Red Hat OpenShift Service Mesh (OSSM). The openshift strategy controls access based on the individual's role-based access control (RBAC) roles of the OpenShift Container Platform. 2.7.2.6. Supported configurations for Distributed Tracing Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated. 2.7.2.7. Supported WebAssembly module 3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules. 2.7.3. steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 2.8. Installing the Operators To install Red Hat OpenShift Service Mesh, first install the Red Hat OpenShift Service Mesh Operator and any optional Operators on OpenShift Container Platform. Then create a ServiceMeshControlPlane resource to deploy the control plane. Note This basic installation is configured based on the default OpenShift settings and is not designed for production use. Use this default installation to verify your installation, and then configure your service mesh for your specific environment. Prerequisites Read the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. The following steps show how to install a basic instance of Red Hat OpenShift Service Mesh on OpenShift Container Platform. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed.
As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. 2.8.1. Service Mesh Operators overview Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Warning Do not install Community versions of the Operators. Community Operators are not supported. The following Operator is required: Red Hat OpenShift Service Mesh Operator Allows you to connect, secure, control, and observe the microservices that comprise your applications. It also defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. The following Operators are optional: Kiali Operator provided by Red Hat Provides observability for your service mesh. You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift distributed tracing platform (Tempo) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Grafana Tempo project. The following optional Operators are deprecated: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. OpenShift Elasticsearch Operator Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project. 2.8.2. Installing the Operators To install Red Hat OpenShift Service Mesh, you must install the Red Hat OpenShift Service Mesh Operator. Repeat the procedure for each additional Operator you want to install. Additional Operators include: Kiali Operator provided by Red Hat Tempo Operator Deprecated additional Operators include: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Operator Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. 
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. In the OpenShift Container Platform web console, click Operators OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported. Click Install . On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator installs before repeating the steps for the next Operator you want to install. The Red Hat OpenShift Service Mesh Operator installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Kiali Operator provided by Red Hat installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Tempo Operator installs in the openshift-tempo-operator namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform (Jaeger) installs in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. The OpenShift Elasticsearch Operator installs in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, OpenShift Elasticsearch Operator is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. Verification After you have installed all of the Operators, click Operators Installed Operators to verify that your Operators are installed. 2.8.3. Configuring the Service Mesh Operator to run on infrastructure nodes This task should only be performed if the Service Mesh Operator will run on an infrastructure node. If the operator will run on a worker node, skip this task. Prerequisites The Service Mesh Operator must be installed. One of the nodes comprising the deployment must be an infrastructure node. For more information, see "Creating infrastructure machine sets." Procedure List the operators installed in the namespace: USD oc -n openshift-operators get subscriptions Edit the Service Mesh Operator Subscription resource to specify where the operator should run: USD oc -n openshift-operators edit subscription <name> 1 1 <name> represents the name of the Subscription resource. The default name of the Subscription resource is servicemeshoperator . Add the nodeSelector and tolerations fields to spec.config in the Subscription resource: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/servicemeshoperator.openshift-operators: "" name: servicemeshoperator namespace: openshift-operators # ...
spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Ensures that the operator pod is only scheduled on an infrastructure node. 2 Ensures that the pod is accepted by the infrastructure node. 2.8.4. Verifying the Service Mesh Operator is running on infrastructure node Procedure Verify that the node associated with the Operator pod is an infrastructure node: USD oc -n openshift-operators get po -l name=istio-operator -owide 2.8.5. steps The Red Hat OpenShift Service Mesh Operator does not create the Service Mesh custom resource definitions (CRDs) until you deploy a Service Mesh control plane. You can use the ServiceMeshControlPlane resource to install and configure the Service Mesh components. For more information, see Creating the ServiceMeshControlPlane . 2.9. Creating the ServiceMeshControlPlane 2.9.1. About ServiceMeshControlPlane The control plane includes Istiod, Ingress and Egress Gateways, and other components, such as Kiali and Jaeger. The control plane must be deployed in a separate namespace than the Service Mesh Operators and the data plane applications and services. You can deploy a basic installation of the ServiceMeshControlPlane (SMCP) from the OpenShift Container Platform web console or the command line using the oc client tool. Note This basic installation is configured based on the default OpenShift Container Platform settings and is not designed for production use. Use this default installation to verify your installation, and then configure your ServiceMeshControlPlane settings for your environment. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project. 2.9.1.1. Deploying the Service Mesh control plane from the web console You can deploy a basic ServiceMeshControlPlane by using the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Create a project named istio-system . Navigate to Home Projects . Click Create Project . In the Name field, enter istio-system . The ServiceMeshControlPlane resource must be installed in a project that is separate from your microservices and Operators. These steps use istio-system as an example, but you can deploy your Service Mesh control plane in any project as long as it is separate from the project that contains your services. Click Create . Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator, then click Istio Service Mesh Control Plane . On the Istio Service Mesh Control Plane tab, click Create ServiceMeshControlPlane . Accept the default Service Mesh control plane version to take advantage of the features available in the most current version of the product. The version of the control plane determines the features available regardless of the version of the Operator. Click Create . The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. 
You can configure ServiceMeshControlPlane settings at a later time. Verification To verify the control plane installed correctly, click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured. 2.9.1.2. Deploying the Service Mesh control plane using the CLI You can deploy a basic ServiceMeshControlPlane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Create a project named istio-system . USD oc new-project istio-system Create a ServiceMeshControlPlane file named istio-installation.yaml using the following example. The version of the Service Mesh control plane determines the features available regardless of the version of the Operator. Example version 2.6 istio-installation.yaml apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 tracing: type: None sampling: 10000 addons: kiali: enabled: true name: kiali grafana: enabled: true Run the following command to deploy the Service Mesh control plane, where <istio_installation.yaml> includes the full path to your file. USD oc create -n istio-system -f <istio_installation.yaml> To watch the progress of the pod deployment, run the following command: USD oc get pods -n istio-system -w You should see output similar to the following: NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m 2.9.1.3. Validating your SMCP installation with the CLI You can validate the creation of the ServiceMeshControlPlane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Run the following command to verify the Service Mesh control plane installation, where istio-system is the namespace where you installed the Service Mesh control plane. USD oc get smcp -n istio-system The installation has finished successfully when the STATUS column is ComponentsReady . NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady ["default"] 2.6.6 66m 2.9.2. About control plane components and infrastructure nodes Infrastructure nodes provide a way to isolate infrastructure workloads for two primary purposes: To prevent incurring billing costs against subscription counts To separate maintenance and management of infrastructure workloads You can configure some or all of the Service Mesh control plane components to run on infrastructure nodes. 2.9.2.1. Configuring all control plane components to run on infrastructure nodes using the web console Perform this task if all of the components deployed by the Service Mesh control plane will run on infrastructure nodes. These deployed components include Istiod, Ingress Gateway, and Egress Gateway, and optional applications such as Prometheus, Grafana, and Distributed Tracing. If the control plane will run on a worker node, skip this task. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. 
You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator, and then click Istio Service Mesh Control Plane . Click the name of the control plane resource. For example, basic . Click YAML . Add the nodeSelector and tolerations fields to the spec.runtime.defaults.pod specification in the ServiceMeshControlPlane resource, as shown in the following example: spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Ensures that the ServiceMeshControlPlane pod is only scheduled on an infrastructure node. 2 Ensures that the pod is accepted by the infrastructure node for execution. Click Save . Click Reload . 2.9.2.2. Configuring individual control plane components to run on infrastructure nodes using the web console Perform this task if individual components deployed by the Service Mesh control plane will run on infrastructure nodes. These deployed components include Istiod, the Ingress Gateway, and the Egress Gateway. If the control plane will run on a worker node, skip this task. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator, and then click Istio Service Mesh Control Plane . Click the name of the control plane resource. For example, basic . Click YAML . Add the nodeSelector and tolerations fields to the spec.runtime.components.pilot.pod specification in the ServiceMeshControlPlane resource, as shown in the following example: spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Ensures that the Istiod pod is only scheduled on an infrastructure node. 2 Ensures that the pod is accepted by the infrastructure node for execution. Add the nodeSelector and the tolerations fields to the spec.gateways.ingress.runtime.pod and spec.gateways.egress.runtime.pod specifications in the ServiceMeshControlPlane resource, as shown in the following example: spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: "" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 3 Ensures that the gateway pod is only scheduled on an infrastructure node 2 4 Ensures that the pod is accepted by the infrastructure node for execution. Click Save . Click Reload . 2.9.2.3. Configuring all control plane components to run on infrastructure nodes using the CLI Perform this task if all of the components deployed by the Service Mesh control plane will run on infrastructure nodes. 
These deployed components include Istiod, Ingress Gateway, and Egress Gateway, and optional applications such as Prometheus, Grafana, and Distributed Tracing. If the control plane will run on a worker node, skip this task. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Open the ServiceMeshControlPlane resource as a YAML file: USD oc -n istio-system edit smcp <name> 1 1 <name> represents the name of the ServiceMeshControlPlane resource. To run all of the Service Mesh components deployed by the ServiceMeshControlPlane on infrastructure nodes, add the nodeSelector and tolerations fields to the spec.runtime.defaults.pod spec in the ServiceMeshControlPlane resource: spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Ensures that the SMCP pods are only scheduled on an infrastructure node. 2 Ensures that the pods are accepted by the infrastructure node. 2.9.2.4. Configuring individual control plane components to run on infrastructure nodes using the CLI Perform this task if individual components deployed by the Service Mesh control plane will run on infrastructure nodes. These deployed components include Istiod, the Ingress Gateway, and the Egress Gateway. If the control plane will run on a worker node, skip this task. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Open the ServiceMeshControlPlane resource as a YAML file. USD oc -n istio-system edit smcp <name> 1 1 <name> represents the name of the ServiceMeshControlPlane resource. To run the Istiod component on an infrastructure node, add the nodeSelector and the tolerations fields to the spec.runtime.components.pilot.pod spec in the ServiceMeshControlPlane resource. spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Ensures that the Istiod pod is only scheduled on an infrastructure node. 2 Ensures that the pod is accepted by the infrastructure node. To run Ingress and Egress Gateways on infrastructure nodes, add the nodeSelector and the tolerations fields to the spec.gateways.ingress.runtime.pod spec and the spec.gateways.egress.runtime.pod spec in the ServiceMeshControlPlane resource. spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: "" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 3 Ensures that the gateway pod is only scheduled on an infrastructure node 2 4 Ensures that the pod is accepted by the infrastructure node. 2.9.2.5. Verifying the Service Mesh control plane is running on infrastructure nodes Procedure Confirm that the nodes associated with Istiod, Ingress Gateway, and Egress Gateway pods are infrastructure nodes: USD oc -n istio-system get pods -owide 2.9.3. 
About control plane and cluster-wide deployments A cluster-wide deployment contains a Service Mesh Control Plane that monitors resources for an entire cluster. Monitoring resources for an entire cluster closely resembles Istio functionality in that the control plane uses a single query across all namespaces to monitor Istio and Kubernetes resources. As a result, cluster-wide deployments decrease the number of requests sent to the API server. You can configure the Service Mesh Control Plane for cluster-wide deployments using either the OpenShift Container Platform web console or the CLI. 2.9.3.1. Configuring the control plane for cluster-wide deployment with the web console You can configure the ServiceMeshControlPlane resource for cluster-wide deployment using the OpenShift Container Platform web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator is installed. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Create a project named istio-system . Navigate to Home Projects . Click Create Project . In the Name field, enter istio-system . The ServiceMeshControlPlane resource must be installed in a project that is separate from your microservices and Operators. These steps use istio-system as an example. You can deploy the Service Mesh control plane to any project as long as it is separate from the project that contains your services. Click Create . Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator, then click Istio Service Mesh Control Plane . On the Istio Service Mesh Control Plane tab, click Create ServiceMeshControlPlane . Click YAML view . The version of the Service Mesh control plane determines the features available regardless of the version of the Operator. Modify the spec.mode field of the YAML file to specify ClusterWide . Example version 2.6 istio-installation.yaml apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide Click Create . The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. The operator also creates the ServiceMeshMemberRoll if it does not exist as part of the default configuration. Verification To verify that the control plane installed correctly: Click the Istio Service Mesh Control Plane tab. Click the name of the new ServiceMeshControlPlane object. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources that the Operator created and configured. 2.9.3.2. Configuring the control plane for cluster-wide deployment with the CLI You can configure the ServiceMeshControlPlane resource for cluster-wide deployment using the CLI. In this example, istio-system is the name of the Service Mesh control plane namespace. Prerequisites The Red Hat OpenShift Service Mesh Operator is installed. You have access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Create a project named istio-system . 
USD oc new-project istio-system Create a ServiceMeshControlPlane file named istio-installation.yaml using the following example: Example version 2.6 istio-installation.yaml apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide Run the following command to deploy the Service Mesh control plane: USD oc create -n istio-system -f <istio_installation.yaml> where: <istio_installation.yaml> Specifies the full path to your file. Verification To monitor the progress of the pod deployment, run the following command: USD oc get pods -n istio-system -w You should see output similar to the following example: Example output NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m 2.9.3.3. Customizing the member roll for a cluster-wide mesh In cluster-wide mode, when you create the ServiceMeshControlPlane resource, the ServiceMeshMemberRoll resource is also created. You can modify the ServiceMeshMemberRoll resource after it gets created. After you modify the resource, the Service Mesh operator no longer changes it. If you modify the ServiceMeshMemberRoll resource by using the OpenShift Container Platform web console, accept the prompt to overwrite the modifications. Alternatively, you can create a ServiceMeshMemberRoll resource before deploying the ServiceMeshControlPlane resource. When you create the ServiceMeshControlPlane resource, the Service Mesh Operator will not modify the ServiceMeshMemberRoll . Note The ServiceMeshMemberRoll resource name must be named default and must be created in the same project namespace as the ServiceMeshControlPlane resource. There are two ways to add a namespace to the mesh. You can either add the namespace by specifying its name in the spec.members list, or configure a set of namespace label selectors to include or exclude namespaces based on their labels. Note Regardless of how members are specified in the ServiceMeshMemberRoll resource, you can also add members to the mesh by creating the ServiceMeshMember resource in each namespace. 2.9.4. Validating your SMCP installation with Kiali You can use the Kiali console to validate your Service Mesh installation. The Kiali console offers several ways to validate your Service Mesh components are deployed and configured properly. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure In the OpenShift Container Platform web console, navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the route for the Kiali console. Click the route Location to launch the console. Click Log In With OpenShift . When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. When there are multiple namespaces shown on the Overview page, Kiali shows namespaces with health or validation problems first. Figure 2.1. 
Kiali Overview page The tile for each namespace displays the number of labels, the Istio Config health, the number of Applications and their health, and Traffic for the namespace. If you are validating the console installation and namespaces have not yet been added to the mesh, there might not be any data to display other than istio-system . Kiali has four dashboards specifically for the namespace where the Service Mesh control plane is installed. To view these dashboards, click the Options menu on the tile for the control plane namespace, for example, istio-system , and select one of the following options: Istio Mesh Dashboard Istio Control Plane Dashboard Istio Performance Dashboard Istio Wasm Extension Dashboard Figure 2.2. Grafana Istio Control Plane Dashboard Kiali also installs two additional Grafana dashboards, available from the Grafana Home page: Istio Workload Dashboard Istio Service Dashboard To view the Service Mesh control plane nodes, click the Graph page, select the Namespace where you installed the ServiceMeshControlPlane from the menu, for example istio-system . If necessary, click Display idle nodes . To learn more about the Graph page, click the Graph tour link. To view the mesh topology, select one or more additional namespaces from the Service Mesh Member Roll from the Namespace menu. To view the list of applications in the istio-system namespace, click the Applications page. Kiali displays the health of the applications. Hover your mouse over the information icon to view any additional information noted in the Details column. To view the list of workloads in the istio-system namespace, click the Workloads page. Kiali displays the health of the workloads. Hover your mouse over the information icon to view any additional information noted in the Details column. To view the list of services in the istio-system namespace, click the Services page. Kiali displays the health of the services and of the configurations. Hover your mouse over the information icon to view any additional information noted in the Details column. To view a list of the Istio Configuration objects in the istio-system namespace, click the Istio Config page. Kiali displays the health of the configuration. If there are configuration errors, click the row and Kiali opens the configuration file with the error highlighted. 2.9.5. Additional resources Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane profiles. For more information, see Creating control plane profiles . 2.9.6. steps Add a project to the Service Mesh so that applications can be made available. For more information, see Adding services to a service mesh . 2.10. Adding services to a service mesh A project contains services; however, the services are only available if you add the project to the service mesh. 2.10.1. About adding projects to a service mesh After installing the Operators and creating the ServiceMeshControlPlane resource, add one or more projects to the service mesh. Note In OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the terms are essentially synonymous. You can add projects to an existing service mesh using either the OpenShift Container Platform web console or the CLI.
There are three methods to add a project to a service mesh: Specifying the project name in the ServiceMeshMemberRoll resource. Configuring label selectors in the spec.memberSelectors field of the ServiceMeshMemberRoll resource. Creating the ServiceMeshMember resource in the project. If you use the first method, then you must create the ServiceMeshMemberRoll resource. 2.10.2. Creating the Red Hat OpenShift Service Mesh member roll The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system . 2.10.2.1. Creating the member roll from the web console You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home Projects . Enter a name in the Name field. Click Create . Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll Click Members , then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create . 2.10.2.2. Creating the member roll from the CLI You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. USD oc new-project <your-project> To add your projects as members, modify the following example YAML. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. 
USD oc get smmr -n istio-system default The installation has finished successfully when the STATUS column is Configured . 2.10.3. About adding projects using the ServiceMeshMemberRoll resource Using the ServiceMeshMemberRoll resource is the simplest way to add a project to a service mesh. To add a project, specify the project name in the spec.members field of the ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource specifies which projects are controlled by the ServiceMeshControlPlane resource. Note Adding projects using this method requires the user to have the update servicemeshmemberrolls and the update pods privileges in the project that is being added. If you already have an application, workload, or service to add to the service mesh, see the following: Adding or removing projects from the mesh using the ServiceMeshMemberRoll resource with the web console Adding or removing projects from the mesh using the ServiceMeshMemberRoll resource with the CLI Alternatively, to install a sample application called Bookinfo and add it to a ServiceMeshMemberRoll resource, see the Bookinfo example application tutorial. 2.10.3.1. Adding or removing projects from the mesh using the ServiceMeshMemberRoll resource with the web console You can add or remove projects from the mesh using the ServiceMeshMemberRoll resource with the OpenShift Container Platform web console. You can add any number of projects, but a project can only belong to one mesh. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. The name of the project with the ServiceMeshMemberRoll resource. The names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list. For example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add projects as members (or delete them to remove existing members). You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Click Save . Click Reload . 2.10.3.2. Adding or removing projects from the mesh using ServiceMeshMemberRoll resource with the CLI You can add one or more projects to the mesh using the ServiceMeshMemberRoll resource with the CLI. You can add any number of projects, but a project can only belong to one mesh. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. The name of the project with the ServiceMeshMemberRoll resource. The names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. 
USD oc edit smmr -n <controlplane-namespace> Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Save the file and exit the editor. 2.10.4. About adding projects using the ServiceMeshMember resource A ServiceMeshMember resource provides a way to add a project to a service mesh without modifying the ServiceMeshMemberRoll resource. To add a project, create a ServiceMeshMember resource in the project that you want to add to the service mesh. When the Service Mesh Operator processes the ServiceMeshMember object, the project appears in the status.members list of the ServiceMeshMemberRoll resource. Then, the services that reside in the project are made available to the mesh. The mesh administrator must grant each mesh user permission to reference the ServiceMeshControlPlane resource in the ServiceMeshMember resource. With this permission in place, a mesh user can add a project to a mesh even when that user does not have direct access rights for the service mesh project or the ServiceMeshMemberRoll resource. For more information, see Creating the Red Hat OpenShift Service Mesh members. 2.10.4.1. Adding a project to the mesh using the ServiceMeshMember resource with the web console You can add one or more projects to the mesh using the ServiceMeshMember resource with the OpenShift Container Platform web console. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You know the name of the ServiceMeshControlPlane resource and the name of the project that the resource belongs to. You know the name of the project you want to add to the mesh. A service mesh administrator must explicitly grant access to the service mesh. Administrators can grant users permissions to access the mesh by assigning them the mesh-user Role using a RoleBinding or ClusterRoleBinding . For more information, see Creating the Red Hat OpenShift Service Mesh members . Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project that you want to add to the mesh from the drop-down list. For example, istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member tab. Click Create ServiceMeshMember Accept the default name for the ServiceMeshMember . Click to expand ControlPlaneRef . In the Namespace field, select the project that the ServiceMeshControlPlane resource belongs to. For example, istio-system . In the Name field, enter the name of the ServiceMeshControlPlane resource that this namespace belongs to. For example, basic . Click Create . Verification Confirm the ServiceMeshMember resource was created and that the project was added to the mesh by using the following steps: Click the resource name, for example, default . View the Conditions section shown at the end of the screen. Confirm that the Status of the Reconciled and Ready conditions is True . If the Status is False , see the Reason and Message columns for more information. 2.10.4.2. 
Adding a project to the mesh using the ServiceMeshMember resource with the CLI You can add one or more projects to the mesh using the ServiceMeshMember resource with the CLI. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. You know the name of the ServiceMeshControlPlane resource and the name of the project it belongs to. You know the name of the project you want to add to the mesh. A service mesh administrator must explicitly grant access to the service mesh. Administrators can grant users permissions to access the mesh by assigning them the mesh-user Role using a RoleBinding or ClusterRoleBinding . For more information, see Creating the Red Hat OpenShift Service Mesh members . Procedure Log in to the OpenShift Container Platform CLI. Create the YAML file for the ServiceMeshMember manifest. The manifest adds the my-application project to the service mesh that was created by the ServiceMeshControlPlane resource deployed in the istio-system namespace: apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: my-application spec: controlPlaneRef: namespace: istio-system name: basic Apply the YAML file to create the ServiceMeshMember resource: USD oc apply -f <file-name> Verification Verify that the namespace is part of the mesh by running the following command. Confirm that the value True appears in the READY column. USD oc get smm default -n my-application Example output NAME CONTROL PLANE READY AGE default istio-system/basic True 2m11s Alternatively, view the ServiceMeshMemberRoll resource to confirm that the my-application namespace is displayed in the status.members and status.configuredMembers fields of the ServiceMeshMemberRoll resource. USD oc describe smmr default -n istio-system Example output Name: default Namespace: istio-system Labels: <none> # ... Status: # ... Configured Members: default my-application # ... Members: default my-application 2.10.5. About adding projects using label selectors For cluster-wide deployments, you can use label selectors to add projects to the mesh. Label selectors specified in the ServiceMeshMemberRoll resource enable the Service Mesh Operator to add or remove namespaces to or from the mesh based on namespace labels. Unlike other standard OpenShift Container Platform resources that you can use to specify a single label selector, you can use the ServiceMeshMemberRoll resource to specify multiple label selectors. If the labels for a namespace match any of the selectors specified in the ServiceMeshMemberRoll resource, then the namespace is included in the mesh. Note In OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project , and the CLI uses the term namespace , but the terms are essentially synonymous. 2.10.5.1. Adding a project to the mesh using label selectors with the web console You can use label selectors to add a project to the Service Mesh with the OpenShift Container Platform web console. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. The deployment has an existing ServiceMeshMemberRoll resource. You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Operators Installed Operators . Click the Project menu, and from the drop-down list, select the project where your ServiceMeshMemberRoll resource is deployed.
For example, istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMember Roll . Accept the default name for the ServiceMeshMemberRoll . In the Labels field, enter key-value pairs to define the labels that identify which namespaces to include in the service mesh. If a project namespace has either label specified by the selectors, then the project namespace is included in the service mesh. You do not need to include both labels. For example, entering mykey=myvalue includes all namespaces with this label as part of the mesh. When the selector identifies a match, the project namespace is added to the service mesh. Entering myotherkey=myothervalue includes all namespaces with this label as part of the mesh. When the selector identifies a match, the project namespace is added to the service mesh. Click Create . 2.10.5.2. Adding a project to the mesh using label selectors with the CLI You can use label selectors to add a project to the Service Mesh with the CLI. Prerequisites You have installed the Red Hat OpenShift Service Mesh Operator. The deployment has an existing ServiceMeshMemberRoll resource. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. USD oc edit smmr default -n istio-system You can deploy the Service Mesh control plane to any project provided that it is separate from the project that contains your services. Modify the YAML file to include namespace label selectors in the spec.memberSelectors field of the ServiceMeshMemberRoll resource. Note Instead of using the matchLabels field, you can also use the matchExpressions field in the selector. apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: memberSelectors: 1 - matchLabels: 2 mykey: myvalue 3 - matchLabels: 4 myotherkey: myothervalue 5 1 Contains the label selectors used to identify which project namespaces are included in the service mesh. If a project namespace has either label specified by the selectors, then the project namespace is included in the service mesh. The project namespace does not need both labels to be included. 2 3 Specifies all namespaces with the mykey=myvalue label. When the selector identifies a match, the project namespace is added to the service mesh. 4 5 Specifies all namespaces with the myotherkey=myothervalue label. When the selector identifies a match, the project namespace is added to the service mesh. 2.10.6. Bookinfo example application The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.6.6 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. 
Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars. 2.10.6.1. Installing the Bookinfo application This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Note The Bookinfo sample application cannot be installed on IBM Z(R) and IBM Power(R). Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project. USD oc new-project bookinfo Click Operators Installed Operators . Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created a Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file. Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system -o wide The installation has finished successfully when the STATUS column is Configured . 
NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s ["bookinfo"] From the CLI, deploy the Bookinfo application in the `bookinfo` project by applying the bookinfo.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml You should see output similar to the following: service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created Create the ingress gateway by applying the bookinfo-gateway.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml You should see output similar to the following: gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created Set the value for the GATEWAY_URL parameter: USD export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') 2.10.6.2. Adding default destination rules Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml If you enabled mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml You should see output similar to the following: destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created 2.10.6.3. Verifying the Bookinfo installation To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure from CLI Verify that all pods are ready with this command: USD oc get pods -n bookinfo All pods should have a status of Running . You should see output similar to the following: NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m Run the following command to retrieve the URL for the product page: echo "http://USDGATEWAY_URL/productpage" Copy and paste the output in a web browser to verify the Bookinfo product page is deployed. Procedure from Kiali web console Obtain the address for the Kiali web console. Log in to the OpenShift Container Platform web console. Navigate to Networking Routes . 
On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. Click the link in the Location column for Kiali. Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace. In Kiali, click Graph . Select bookinfo from the Namespace list, and App graph from the Graph Type list. Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported. Use the Duration menu to increase the time period to help ensure older traffic is captured. Use the Refresh Rate menu to refresh traffic more or less often, or not at all. Click Services , Workloads or Istio Config to see list views of bookinfo components, and confirm that they are healthy. 2.10.6.4. Removing the Bookinfo application Follow these steps to remove the Bookinfo application. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). 2.10.6.4.1. Delete the Bookinfo project Procedure Log in to the OpenShift Container Platform web console. Click Home Projects . Click the bookinfo menu , and then click Delete Project . Type bookinfo in the confirmation dialog box, and then click Delete . Alternatively, you can run this command using the CLI to delete the bookinfo project. USD oc delete project bookinfo 2.10.6.4.2. Remove the Bookinfo project from the Service Mesh member roll Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click the Project menu and choose istio-system from the list. Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator. Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll . Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list. Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project. USD oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]' Click Save to update the Service Mesh Member Roll. 2.10.7. Next steps To continue the installation process, you must enable sidecar injection . 2.11. Enabling sidecar injection After adding the namespaces that contain your services to your mesh, the next step is to enable automatic sidecar injection in the Deployment resource for your application. You must enable automatic sidecar injection for each deployment. If you have installed the Bookinfo sample application, the application was deployed and the sidecars were injected as part of the installation procedure. If you are using your own project and service, deploy your applications on OpenShift Container Platform. For more information, see the OpenShift Container Platform documentation, Understanding deployments . Note Traffic started by Init Containers, specialized containers that run before the application containers in a pod, cannot travel outside of the service mesh by default. Any action Init Containers perform that requires establishing a network traffic connection outside of the mesh fails.
For more information about connecting Init Containers to a service, see the Red Hat Knowledgebase solution initContainer in CrashLoopBackOff on pod with Service Mesh sidecar injected. 2.11.1. Prerequisites Services deployed to the mesh , for example the Bookinfo sample application. A Deployment resource file. 2.11.2. Enabling automatic sidecar injection When deploying an application, you must opt in to injection by configuring the label sidecar.istio.io/inject in spec.template.metadata.labels to true in the deployment object. Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem. Prerequisites Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection. Procedure To find your deployments, use the oc get command. USD oc get deployment -n <namespace> For example, to view the Deployment YAML file for the ratings-v1 microservice in the bookinfo namespace, use the following command to see the resource in YAML format. USD oc get deployment -n bookinfo ratings-v1 -o yaml Open the application's Deployment YAML file in an editor. Add the sidecar.istio.io/inject label under spec.template.metadata.labels in your Deployment YAML file and set its value to 'true' as shown in the following example. Example snippet from bookinfo deployment-ratings-v1.yaml apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true' Note Using the annotations parameter when enabling automatic sidecar injection is deprecated and is replaced by using the labels parameter. Save the Deployment YAML file. Add the file back to the project that contains your app. USD oc apply -n <namespace> -f deployment.yaml In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited. USD oc apply -n bookinfo -f deployment-ratings-v1.yaml To verify that the resource uploaded successfully, run the following command. USD oc get deployment -n <namespace> <deploymentName> -o yaml For example, USD oc get deployment -n bookinfo ratings-v1 -o yaml 2.11.3. Validating sidecar injection The Kiali console offers several ways to validate whether or not your applications, services, and workloads have a sidecar proxy. Figure 2.3. Missing sidecar badge The Graph page displays a node badge indicating a Missing Sidecar on the following graphs: App graph Versioned app graph Workload graph Figure 2.4. Missing sidecar icon The Applications page displays a Missing Sidecar icon in the Details column for any applications in a namespace that do not have a sidecar. The Workloads page displays a Missing Sidecar icon in the Details column for any workloads in a namespace that do not have a sidecar. The Services page displays a Missing Sidecar icon in the Details column for any services in a namespace that do not have a sidecar. When there are multiple versions of a service, you use the Service Details page to view Missing Sidecar icons. The Workload Details page has a special unified Logs tab that lets you view and correlate application and proxy logs. You can view the Envoy logs as another way to validate sidecar injection for your application workloads.
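In addition to the Kiali views, you can run a quick spot check from the CLI to confirm that a sidecar was injected. The following sketch assumes the bookinfo project and the ratings-v1 deployment from the earlier example; substitute your own namespace and label selector.
USD oc get pods -n bookinfo -l app=ratings
Each pod that has a sidecar reports 2/2 in the READY column, one container for the application and one for the istio-proxy sidecar. To list the container names explicitly, run the following command:
USD oc get pod -n bookinfo -l app=ratings -o jsonpath='{.items[0].spec.containers[*].name}'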
The Workload Details page also has an Envoy tab for any workload that is an Envoy proxy or has been injected with an Envoy proxy. This tab displays a built-in Envoy dashboard that includes subtabs for Clusters , Listeners , Routes , Bootstrap , Config , and Metrics . For information about enabling Envoy access logs, see the Troubleshooting section. For information about viewing Envoy logs, see Viewing logs in the Kiali console. 2.11.4. Setting proxy environment variables through annotations Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected into the sidecar. Example injection-template.yaml apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }" Warning You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations will be overwritten or deleted by the Operator during reconciliation. 2.11.5. Updating sidecar proxies To update the configuration for sidecar proxies, the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods: USD oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}' If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods. 2.11.6. Next steps Configure Red Hat OpenShift Service Mesh features for your environment. Security Traffic management Metrics, logs, and traces 2.12. Managing users and profiles 2.12.1. Creating the Red Hat OpenShift Service Mesh members ServiceMeshMember resources provide a way for Red Hat OpenShift Service Mesh administrators to delegate permissions to add projects to a service mesh, even when the respective users don't have direct access to the service mesh project or member roll. While project administrators are automatically given permission to create the ServiceMeshMember resource in their project, they cannot point it to any ServiceMeshControlPlane until the service mesh administrator explicitly grants access to the service mesh. Administrators can grant users permissions to access the mesh by granting them the mesh-user user role. In this example, istio-system is the name of the Service Mesh control plane project. USD oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name> Administrators can modify the mesh-user role binding in the Service Mesh control plane project to specify the users and groups that are granted access.
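To grant an entire group access to the mesh instead of a single user, an administrator can use the analogous oc policy add-role-to-group command. The group name developers in the following sketch is an assumption for illustration; replace it with your own group, and replace istio-system if your Service Mesh control plane is installed in a different project.
USD oc policy add-role-to-group -n istio-system --role-namespace istio-system mesh-user developers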
The ServiceMeshMember adds the project to the ServiceMeshMemberRoll within the Service Mesh control plane project that it references. apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic The mesh-users role binding is created automatically after the administrator creates the ServiceMeshControlPlane resource. An administrator can use the following command to add a role to a user. USD oc policy add-role-to-user The administrator can also create the mesh-user role binding before the administrator creates the ServiceMeshControlPlane resource. For example, the administrator can create it in the same oc apply operation as the ServiceMeshControlPlane resource. This example adds a role binding for alice : apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice 2.12.2. Creating Service Mesh control plane profiles You can create reusable configurations with ServiceMeshControlPlane profiles. Individual users can extend the profiles they create with their own configurations. Profiles can also inherit configuration information from other profiles. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production profiles with team-specific customization. When you configure Service Mesh control plane profiles, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default profile with default settings for Red Hat OpenShift Service Mesh. 2.12.2.1. Creating the ConfigMap To add custom profiles, you must create a ConfigMap named smcp-templates in the openshift-operators project. The Operator container automatically mounts the ConfigMap . Prerequisites An installed, verified Service Mesh Operator. An account with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Location of the Operator deployment. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster-admin . If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <profiles-directory> with the location of the ServiceMeshControlPlane files on your local disk: USD oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators You can use the profiles parameter in the ServiceMeshControlPlane to specify one or more templates. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default 2.12.2.2. Setting the correct network policy Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure the services in your service mesh that were previously exposed through an OpenShift Container Platform route. Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly. 
Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that must be deployed in a namespace that is enrolled in the service mesh should label their deployments with maistra.io/expose-route: "true" , which ensures that OpenShift Container Platform routes to these services still work. 2.13. Security If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh help you manage the complexity of your applications and secure microservices. Before you begin If you have a project, add your project to the ServiceMeshMemberRoll resource . If you don't have a project, install the Bookinfo sample application and add it to the ServiceMeshMemberRoll resource. The sample application helps illustrate security concepts. 2.13.1. About mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol that enables two parties to authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). You can use mTLS without changes to the application or service code. The TLS is handled entirely by the service mesh infrastructure, between the two sidecar proxies. By default, mTLS in Red Hat OpenShift Service Mesh is enabled and set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh that is configured to use strict mTLS is communicating with a service outside the mesh, communication might break between those services because strict mTLS requires both the client and the server to be able to verify the identity of each other. Use permissive mode while you migrate your workloads to Service Mesh. Then, you can enable strict mTLS across your mesh, namespace, or application. Enabling mTLS across your mesh at the Service Mesh control plane level secures all the traffic in your service mesh without rewriting your applications and workloads. You can secure namespaces in your mesh at the data plane level in the ServiceMeshControlPlane resource. To customize traffic encryption connections, configure namespaces at the application level with PeerAuthentication and DestinationRule resources. 2.13.1.1. Enabling strict mTLS across the service mesh If your workloads do not communicate with outside services, you can quickly enable mTLS across your mesh without communication interruptions. You can enable it by setting spec.security.dataPlane.mtls to true in the ServiceMeshControlPlane resource. The Operator creates the required resources. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.6 security: dataPlane: mtls: true You can also enable mTLS by using the OpenShift Container Platform web console. Procedure Log in to the web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the name of your ServiceMeshControlPlane resource, for example, basic . On the Details page, click the toggle in the Security section for Data Plane Security . 2.13.1.1.1.
Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services by creating a policy. Procedure Create a YAML file using the following example. PeerAuthentication Policy example policy.yaml apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT Replace <namespace> with the namespace where the service is located. Run the following command to create the resource in the namespace where the service is located. It must match the namespace field in the Policy resource you just created. USD oc create -n <namespace> -f <policy.yaml> Note If you are not using automatic mTLS and you are setting PeerAuthentication to STRICT, you must create a DestinationRule resource for your service. 2.13.1.1.2. Configuring sidecars for outgoing connections Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh. Procedure Create a YAML file using the following example. DestinationRule example destination-rule.yaml apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: "*.<namespace>.svc.cluster.local" trafficPolicy: tls: mode: ISTIO_MUTUAL Replace <namespace> with the namespace where the service is located. Run the following command to create the resource in the namespace where the service is located. It must match the namespace field in the DestinationRule resource you just created. USD oc create -n <namespace> -f <destination-rule.yaml> 2.13.1.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your Service Mesh control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. The default is TLS_AUTO and does not specify a version of TLS. Table 2.5. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 Procedure Log in to the web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the name of your ServiceMeshControlPlane resource, for example, basic . Click the YAML tab. Insert the following code snippet in the YAML editor. Replace the value in the minProtocolVersion with the TLS version value. In this example, the minimum TLS version is set to TLSv1_2 . ServiceMeshControlPlane snippet kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2 Click Save . Click Refresh to verify that the changes updated correctly. 2.13.1.2. Validating encryption with Kiali The Kiali console offers several ways to validate whether or not your applications, services, and workloads have mTLS encryption enabled. Figure 2.5. Masthead icon mesh-wide mTLS enabled At the right side of the masthead, Kiali shows a lock icon when the mesh has strictly enabled mTLS for the whole service mesh. It means that all communications in the mesh use mTLS. Figure 2.6. 
Masthead icon mesh-wide mTLS partially enabled Kiali displays a hollow lock icon when either the mesh is configured in PERMISSIVE mode or there is an error in the mesh-wide mTLS configuration. Figure 2.7. Security badge The Graph page has the option to display a Security badge on the graph edges to indicate that mTLS is enabled. To enable security badges on the graph, from the Display menu, under Show Badges , select the Security checkbox. When an edge shows a lock icon, it means at least one request with mTLS enabled is present. If there are both mTLS and non-mTLS requests, the side panel shows the percentage of requests that use mTLS. The Applications Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. The Workloads Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. The Services Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. Also note that Kiali displays a lock icon in the Network section next to ports that are configured for mTLS. 2.13.2. Configuring Role Based Access Control (RBAC) Role-based access control (RBAC) objects determine whether a user or service is allowed to perform a given action within a project. You can define mesh-, namespace-, and workload-wide access control for your workloads in the mesh. To configure RBAC, create an AuthorizationPolicy resource in the namespace for which you are configuring access. If you are configuring mesh-wide access, use the project where you installed the Service Mesh control plane, for example istio-system . For example, with RBAC, you can create policies that: Configure intra-project communication. Allow or deny full access to all workloads in the default namespace. Allow or deny ingress gateway access. Require a token for access. An authorization policy includes a selector, an action, and a list of rules: The selector field specifies the target of the policy. The action field specifies whether to allow or deny the request. The rules field specifies when to trigger the action. The from field specifies constraints on the request origin. The to field specifies constraints on request target and parameters. The when field specifies additional conditions that must be met to apply the rule. Procedure Create your AuthorizationPolicy resource. The following example shows a resource that updates the ingress-policy AuthorizationPolicy to deny an IP address from accessing the ingress gateway. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: ["1.2.3.4"] After you write your resource, run the following command to create it in your namespace. The namespace must match the metadata.namespace field in your AuthorizationPolicy resource. USD oc create -n istio-system -f <filename> Next steps Consider the following examples for other common configurations. 2.13.2.1. Configure intra-project communication You can use AuthorizationPolicy to configure your Service Mesh control plane to allow or deny traffic to your mesh or to services in your mesh. 2.13.2.1.1. Restrict access to services outside a namespace You can deny requests from any source that is not in the bookinfo namespace with the following AuthorizationPolicy resource example.
apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: ["bookinfo"] 2.13.2.1.2. Creating allow-all and default deny-all authorization policies The following example shows an allow-all authorization policy that allows full access to all workloads in the bookinfo namespace. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {} The following example shows a policy that denies any access to all workloads in the bookinfo namespace. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {} 2.13.2.2. Allow or deny access to the ingress gateway You can set an authorization policy to add allow or deny lists based on IP addresses. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: ["1.2.3.4", "5.6.7.0/24"] 2.13.2.3. Restrict access with JSON Web Token You can restrict what can access your mesh with a JSON Web Token (JWT). After authentication, a user or service can access routes, services that are associated with that token. Create a RequestAuthentication resource, which defines the authentication methods that are supported by a workload. The following example accepts a JWT issued by http://localhost:8080/auth/realms/master . apiVersion: "security.istio.io/v1beta1" kind: "RequestAuthentication" metadata: name: "jwt-example" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: "http://localhost:8080/auth/realms/master" jwksUri: "http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs" Then, create an AuthorizationPolicy resource in the same namespace to work with RequestAuthentication resource you created. The following example requires a JWT to be present in the Authorization header when sending a request to httpbin workloads. apiVersion: "security.istio.io/v1beta1" kind: "AuthorizationPolicy" metadata: name: "frontend-ingress" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: ["*"] 2.13.3. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma separated list of cipher suites using spec.security.controlplane.tls.cipherSuites and ECDH curves using spec.security.controlplane.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes are empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites. 
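For illustration, the following ServiceMeshControlPlane snippet sets both fields. This is a minimal sketch rather than a recommendation for a specific environment: the cipher suites and curves are examples taken from the supported lists that follow, and the resource name basic is an assumption.
ServiceMeshControlPlane snippet
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  security:
    controlPlane:
      tls:
        cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        ecdhCurves:
        - CurveP256
        - CurveP384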
The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 2.13.4. Configuring JSON Web Key Sets resolver certificate authority You can configure your own JSON Web Key Sets (JWKS) resolver certificate authority (CA) from the ServiceMeshControlPlane (SMCP) spec. Procedure Edit the ServiceMeshControlPlane spec file: USD oc edit smcp <smcp-name> Enable mtls for the data plane by setting the value of the mtls field to true in the ServiceMeshControlPlane spec, as shown in the following example: spec: security: dataPlane: mtls: true # enable mtls for data plane # JWKSResolver extra CA # PEM-encoded certificate content to trust an additional CA jwksResolverCA: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE----- ... Save the changes. OpenShift Container Platform automatically applies them. A ConfigMap such as pilot-jwks-cacerts-<SMCP name> is created with the CA .pem data . Example ConfigMap pilot-jwks-cacerts-<SMCP name> kind: ConfigMap apiVersion: v1 data: extra.pem: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE----- 2.13.5. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates a self-signed root certificate and key and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites Install Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. Deploy the Bookinfo sample application to verify the results with these instructions. OpenSSL is required to verify certificates. 2.13.5.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is named ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is named root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Save the example certificates from the Maistra repository locally and replace <path> with the path to your certificates. Create a secret named cacert that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem and cert-chain.pem . 
USD oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \ --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \ --from-file=<path>/cert-chain.pem In the ServiceMeshControlPlane resource, set spec.security.dataPlane.mtls to true and configure the certificateAuthority field as shown in the following example. The default rootCADir is /etc/cacerts . You do not need to set the privateKey if the key and certs are mounted in the default location. Service Mesh reads the certificates and key from the secret-mount files. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts After creating, changing, or deleting the cacerts secret, the Service Mesh control plane istiod and gateway pods must be restarted so that the changes take effect. Use the following command to restart the pods: USD oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)' The Operator will automatically recreate the pods after they have been deleted. Restart the bookinfo application pods so that the sidecar proxies pick up the secret changes. Use the following command to restart the pods: USD oc -n bookinfo delete pods --all You should see output similar to the following: pod "details-v1-6cd699df8c-j54nh" deleted pod "productpage-v1-5ddcb4b84f-mtmf2" deleted pod "ratings-v1-bdbcc68bc-kmng4" deleted pod "reviews-v1-754ddd7b6f-lqhsv" deleted pod "reviews-v2-675679877f-q67r2" deleted pod "reviews-v3-79d7549c7-c2gjs" deleted Verify that the pods were created and are ready with the following command: USD oc get pods -n bookinfo 2.13.5.2. Verifying your certificates Use the Bookinfo sample application to verify that the workload certificates are signed by the certificates that were plugged into the CA. This process requires that you have openssl installed on your machine. To extract certificates from bookinfo workloads, use the following command: USD sleep 60 USD oc -n bookinfo exec "USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt USD sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem USD awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem After running the command, you should have three files in your working directory: proxy-cert-1.pem , proxy-cert-2.pem and proxy-cert-3.pem . Verify that the root certificate is the same as the one specified by the administrator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt Run the following syntax at the terminal window. USD openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt Compare the certificates by running the following syntax at the terminal window. USD diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt You should see the following result: Files /tmp/root-cert.crt.txt and /tmp/pod-root-cert.crt.txt are identical Verify that the CA certificate is the same as the one specified by the administrator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt Run the following syntax at the terminal window.
USD openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt Compare the certificates by running the following syntax at the terminal window. USD diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt You should see the following result: Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates. USD openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem You should see the following result: ./proxy-cert-1.pem: OK 2.13.5.3. Removing the certificates To remove the certificates you added, follow these steps. Remove the secret cacerts . In this example, istio-system is the name of the Service Mesh control plane project. USD oc delete secret cacerts -n istio-system Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true 2.13.6. About integrating Service Mesh with cert-manager and istio-csr The cert-manager tool is a solution for X.509 certificate management on Kubernetes. It delivers a unified API to integrate applications with private or public key infrastructure (PKI), such as Vault, Google Cloud Certificate Authority Service, Let's Encrypt, and other providers. The cert-manager tool ensures the certificates are valid and up-to-date by attempting to renew certificates at a configured time before they expire. For Istio users, cert-manager also provides integration with istio-csr , which is a certificate authority (CA) server that handles certificate signing requests (CSR) from Istio proxies. The server then delegates signing to cert-manager, which forwards CSRs to the configured CA server. Note Red Hat provides support for integrating with istio-csr and cert-manager. Red Hat does not provide direct support for the istio-csr or the community cert-manager components. The use of community cert-manager shown here is for demonstration purposes only. Prerequisites One of these versions of cert-manager: cert-manager Operator for Red Hat OpenShift 1.10 or later community cert-manager Operator 1.11 or later cert-manager 1.11 or later OpenShift Service Mesh Operator 2.4 or later istio-csr 0.6.0 or later Note To avoid creating config maps in all namespaces when the istio-csr server is installed with the jetstack/cert-manager-istio-csr Helm chart, use the following setting: app.controller.configmapNamespaceSelector: "maistra.io/member-of: <istio-namespace>" in the istio-csr.yaml file. 2.13.6.1. Installing cert-manager You can install the cert-manager tool to manage the lifecycle of TLS certificates and ensure that they are valid and up-to-date. If you are running Istio in your environment, you can also install the istio-csr certificate authority (CA) server, which handles certificate signing requests (CSR) from Istio proxies. The istio-csr CA delegates signing to the cert-manager tool, which delegates to the configured CA. 
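Note The istio-csr installation step in the following Procedure uses the community jetstack/cert-manager-istio-csr Helm chart. If the jetstack chart repository is not already available to Helm on your workstation, which is an assumption about your local setup, you would typically add it first:
USD helm repo add jetstack https://charts.jetstack.io
USD helm repo update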
Procedure Create the root cluster issuer: Create the cluster-issuer object as in the following example: Example cluster-issuer.yaml apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-root-issuer namespace: cert-manager spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: root-ca namespace: cert-manager spec: isCA: true duration: 21600h # 900d secretName: root-ca commonName: root-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: selfsigned-root-issuer kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: root-ca spec: ca: secretName: root-ca Note The namespace of the selfsigned-root-issuer issuer and root-ca certificate is cert-manager because root-ca is a cluster issuer, so the cert-manager looks for a referenced secret in its own namespace. The namespace is called cert-manager in the case of the cert-manager Operator for Red Hat OpenShift. Create the object by using the following command: USD oc apply -f cluster-issuer.yaml Create the istio-ca object as in the following example: Example istio-ca.yaml apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 21600h secretName: istio-ca commonName: istio-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: root-ca kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca Use the following command to create the object: USD oc apply -n istio-system -f istio-ca.yaml Install istio-csr : USD helm install istio-csr jetstack/cert-manager-istio-csr \ -n istio-system \ -f deploy/examples/cert-manager/istio-csr/istio-csr.yaml Example istio-csr.yaml replicaCount: 2 image: repository: quay.io/jetstack/cert-manager-istio-csr tag: v0.6.0 pullSecretName: "" app: certmanager: namespace: istio-system issuer: group: cert-manager.io kind: Issuer name: istio-ca controller: configmapNamespaceSelector: "maistra.io/member-of=istio-system" leaderElectionNamespace: istio-system istio: namespace: istio-system revisions: ["basic"] server: maxCertificateDuration: 5m tls: certificateDNSNames: # This DNS name must be set in the SMCP spec.security.certificateAuthority.cert-manager.address - cert-manager-istio-csr.istio-system.svc Deploy SMCP: USD oc apply -f mesh.yaml -n istio-system Example mesh.yaml apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: grafana: enabled: false kiali: enabled: false prometheus: enabled: false proxy: accessLogging: file: name: /dev/stdout security: certificateAuthority: cert-manager: address: cert-manager-istio-csr.istio-system.svc:443 type: cert-manager dataPlane: mtls: true identity: type: ThirdParty tracing: type: None --- apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - httpbin - sleep Note security.identity.type: ThirdParty must be set when security.certificateAuthority.type: cert-manager is configured. Verification Use the sample httpbin service and sleep app to check mTLS traffic from ingress gateways and verify that the cert-manager tool is installed. 
Deploy the HTTP and sleep apps: USD oc new-project <namespace> USD oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml USD oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml Verify that sleep can access the httpbin service: USD oc exec "USD(oc get pod -l app=sleep -n <namespace> \ -o jsonpath={.items..metadata.name})" -c sleep -n <namespace> -- \ curl http://httpbin.<namespace>:8000/ip -s -o /dev/null \ -w "%{http_code}\n" Example output: 200 Check mTLS traffic from the ingress gateway to the httpbin service: USD oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml Get the istio-ingressgateway route: INGRESS_HOST=USD(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}') Verify mTLS traffic from the ingress gateway to the httpbin service: USD curl -s -I http://USDINGRESS_HOST/headers -o /dev/null -w "%{http_code}" -s 2.13.7. Additional resources For information about how to install the cert-manager Operator for OpenShift Container Platform, see: Installing the cert-manager Operator for Red Hat OpenShift . 2.14. Managing traffic in your service mesh Using Red Hat OpenShift Service Mesh, you can control the flow of traffic and API calls between services. Some services in your service mesh might need to communicate within the mesh and others might need to be hidden. You can manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services. 2.14.1. Using gateways You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh. Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy. Gateway example A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. 
The following example shows a sample gateway configuration for external HTTPS ingress traffic: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example: apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy You can then configure the virtual service with routing rules for the external traffic. 2.14.1.1. Enabling gateway injection Gateway configurations apply to standalone Envoy proxies running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Because gateways are Envoy proxies, you can configure Service Mesh to inject gateways automatically, similar to how you can inject sidecars. Using automatic injection for gateways, you can deploy and manage gateways independent from the ServiceMeshControlPlane resource and manage the gateways with your user applications. Using auto-injection for gateway deployments gives developers full control over the gateway deployment while simplifying operations. When a new upgrade is available, or a configuration has changed, you restart the gateway pods to update them. Doing so makes the experience of operating a gateway deployment the same as operating sidecars. Note Injection is disabled by default for the ServiceMeshControlPlane namespace, for example the istio-system namespace. As a security best practice, deploy gateways in a different namespace from the control plane. 2.14.1.2. Deploying automatic gateway injection When deploying a gateway, you must opt-in to injection by adding an injection label or annotation to the gateway deployment object. The following example deploys a gateway. Prerequisites The namespace must be a member of the mesh by defining it in the ServiceMeshMemberRoll or by creating a ServiceMeshMember resource. Procedure Set a unique label for the Istio ingress gateway. This setting is required to ensure that the gateway can select the workload. This example uses ingressgateway as the name of the gateway. apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: "true" 1 spec: containers: - name: istio-proxy image: auto 2 1 Enable gateway injection by setting the sidecar.istio.io/inject field to "true" . 2 Set the image field to auto so that the image automatically updates each time the pod starts. Set up roles to allow reading credentials for TLS. 
apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default Grant access to the new gateway from outside the cluster, which is required whenever spec.security.manageNetworkPolicy is set to true . apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress Automatically scale the pod when ingress traffic increases. This example sets the minimum replicas to 2 and the maximum replicas to 5 . It also creates another replica when utilization reaches 80%. apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway Specify the minimum number of pods that must be running on the node. This example ensures one replica is running if a pod gets restarted on a new node. apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway 2.14.1.3. Managing ingress traffic In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh. 2.14.1.3.1. Determining the ingress IP and ports Ingress configuration differs depending on if your environment supports an external load balancer. An external load balancer is set in the ingress IP and ports for the cluster. To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc get svc istio-ingressgateway -n istio-system That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace. If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway. 2.14.1.3.1.1. Determining ingress ports with a load balancer Follow these instructions if your environment has an external load balancer. Procedure Run the following command to set the ingress IP and ports. This command sets a variable in your terminal. USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') Run the following command to set the ingress port. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') Run the following command to set the secure ingress port. 
USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}') Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's EXTERNAL-IP value is not an IP address. Instead, it's a hostname, and the command fails to set the INGRESS_HOST environment variable. In that case, use the following command to correct the INGRESS_HOST value: USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') 2.14.1.3.1.2. Determining ingress ports without a load balancer If your environment does not have an external load balancer, determine the ingress ports and use a node port instead. Procedure Set the ingress ports. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}') Additional resources Configuring the node port service range 2.14.1.4. Configuring an ingress gateway An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests. The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login . Procedure Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it. Gateway example gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" Apply the YAML file. USD oc apply -f gateway.yaml Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it. Virtual service example apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 Apply the YAML file. USD oc apply -f vs.yaml Test that the gateway and VirtualService have been set correctly. Set the Gateway URL. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Set the port number. In this example, istio-system is the name of the Service Mesh control plane project. 
export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}') Test a page that has been explicitly exposed. curl -s -I "USDGATEWAY_URL/productpage" The expected result is 200 . 2.14.2. Understanding automatic routes Important Istio OpenShift Routing (IOR) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. OpenShift routes for gateways are automatically managed in Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. Note Starting with Service Mesh 2.5, automatic routes are disabled by default for new instances of the ServiceMeshControlPlane resource. 2.14.2.1. Routes with subdomains Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported, but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host gateway. For more information, see Using wildcard routes . 2.14.2.2. Creating subdomain routes The following example creates a gateway in the Bookinfo sample application, which creates subdomain routes. apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com The Gateway resource creates the following OpenShift routes. You can check that the routes are created by using the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc -n istio-system get routes Expected output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None If you delete the gateway, Red Hat OpenShift Service Mesh deletes the routes. However, routes you have manually created are never modified by Red Hat OpenShift Service Mesh. 2.14.2.3. Route labels and annotations Sometimes specific labels or annotations are needed in an OpenShift route. For example, some advanced features in OpenShift routes are managed using special annotations. See "Route-specific annotations" in the following "Additional resources" section. For this and other use cases, Red Hat OpenShift Service Mesh will copy all labels and annotations present in the Istio gateway resource (with the exception of annotations starting with kubectl.kubernetes.io ) into the managed OpenShift route resource. If you need specific labels or annotations in the OpenShift routes created by Service Mesh, create them in the Istio gateway resource and they will be copied into the OpenShift route resources managed by the Service Mesh. Additional resources Route-specific annotations . 2.14.2.4. Disabling automatic route creation By default, the ServiceMeshControlPlane resource automatically synchronizes the Istio gateway resources with OpenShift routes. 
Disabling the automatic route creation allows you more flexibility to control routes if you have a special case or prefer to control routes manually. 2.14.2.4.1. Disabling automatic route creation for specific cases If you want to disable the automatic management of OpenShift routes for a specific Istio gateway, you must add the annotation maistra.io/manageRoute: false to the gateway metadata definition. Red Hat OpenShift Service Mesh will ignore Istio gateways with this annotation, while keeping the automatic management of the other Istio gateways. 2.14.2.4.2. Disabling automatic route creation for all cases You can disable the automatic management of OpenShift routes for all gateways in your mesh. Disable integration between Istio gateways and OpenShift routes by setting the ServiceMeshControlPlane field gateways.openshiftRoute.enabled to false . For example, see the following resource snippet. apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false 2.14.3. Understanding service entries A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following: Manage traffic for services that run outside of the service mesh. Redirect and forward traffic for external destinations (such as, APIs consumed from the web) or traffic to services in legacy infrastructure. Define retry, timeout, and fault injection policies for external destinations. Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh. Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes. Service entry examples The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem 2.14.4. Using VirtualServices You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can: Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. 
A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience. Configure traffic rules in combination with gateways to control ingress and egress traffic. 2.14.4.1. Configuring VirtualServices Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each given request to the virtual service to a specific real destination within the mesh. Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services. Procedure Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application. Example VirtualService.yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3 Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file. USD oc apply -f <VirtualService.yaml> 2.14.4.2. VirtualService configuration reference Parameter Description The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name. The http section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests. The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name: 2.14.5. Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. 
Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 2.14.6. Understanding network policies Red Hat OpenShift Service Mesh automatically creates and manages a number of NetworkPolicies resources in the Service Mesh control plane and application namespaces. This is to ensure that applications and the control plane can communicate with each other. For example, if you have configured your OpenShift Container Platform cluster to use the SDN plugin, Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project. This enables ingress to all pods in the mesh from the other mesh members and the control plane. This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. If you remove a namespace from Service Mesh, this NetworkPolicy resource is deleted from the project. 2.14.6.1. Disabling automatic NetworkPolicy creation If you want to disable the automatic creation and management of NetworkPolicy resources, for example to enforce company security policies, or to allow direct access to pods in the mesh, you can do so. You can edit the ServiceMeshControlPlane and set spec.security.manageNetworkPolicy to false . Note When you disable spec.security.manageNetworkPolicy Red Hat OpenShift Service Mesh will not create any NetworkPolicy objects. The system administrator is responsible for managing the network and fixing any issues this might cause. Prerequisites Red Hat OpenShift Service Mesh Operator version 2.1.1 or higher installed. ServiceMeshControlPlane resource updated to version 2.1 or higher. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the project where you installed the Service Mesh control plane, for example istio-system , from the Project menu. Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane , for example basic-install . On the Create ServiceMeshControlPlane Details page, click YAML to modify your configuration. Set the ServiceMeshControlPlane field spec.security.manageNetworkPolicy to false , as shown in this example. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false Click Save . 2.14.7. Configuring sidecars for traffic management By default, Red Hat OpenShift Service Mesh configures every Envoy proxy to accept traffic on all the ports of its associated workload, and to reach every workload in the mesh when forwarding traffic. 
You can use a sidecar configuration to do the following: Fine-tune the set of ports and protocols that an Envoy proxy accepts. Limit the set of services that the Envoy proxy can reach. Note To optimize performance of your service mesh, consider limiting Envoy proxy configurations. In the Bookinfo sample application, configure a Sidecar so all services can reach other services running in the same namespace and control plane. This Sidecar configuration is required for using Red Hat OpenShift Service Mesh policy and telemetry features. Procedure Create a YAML file using the following example to specify that you want a sidecar configuration to apply to all workloads in a particular namespace. Otherwise, choose specific workloads using a workloadSelector . Example sidecar.yaml apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - "./*" - "istio-system/*" Run the following command to apply sidecar.yaml , where sidecar.yaml is the path to the file. USD oc apply -f sidecar.yaml Run the following command to verify that the sidecar was created successfully. USD oc get sidecar 2.14.8. Routing Tutorial This guide references the Bookinfo sample application to provide examples of routing in an example application. Install the Bookinfo application to learn how these routing examples work. 2.14.8.1. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo app /productpage in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites Deploy the Bookinfo sample application to work with the following examples. 2.14.8.2. Applying a virtual service In the following procedure, the virtual service routes all traffic to v1 of each microservice by applying virtual services that set the default version for the microservices. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route to the v1 version of the Bookinfo microservices including the reviews service version 1. 2.14.8.3. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser.
The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 2.14.8.4. Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 2.15. Gateway migration As a network administrator, the preferred method for deploying ingress and egress gateways is with a Deployment resource using gateway injection. 2.15.1. About gateway migration In Red Hat OpenShift Service Mesh 2.x, the Service Mesh Operator creates an ingress and egress gateway in the control plane namespace by default. You can define additional gateways in the ServiceMeshControlPlane resource. Deploying ingress and egress gateways with a Deployment resource using gateway injection provides greater flexibility and control. This deployment approach is a better practice because it allows you to manage gateways alongside the corresponding applications rather than in the control plane resource. Therefore, you should disable the default gateways, move away from the Service Mesh Control Plane declaration, and begin to use gateway injection. 2.15.2. Migrate from SMCP-Defined gateways to gateway injection This procedure explains how to migrate with zero downtime from gateways defined in the ServiceMeshControlPlane resource to gateways that are managed using gateway injection. This migration is achieved by using the existing gateway Service object to target a new gateway deployment that is created using gateway injection. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . The Red Hat OpenShift Service Mesh Operator must be installed. The ServiceMeshControlPlane resource must be deployed and an ingress gateway exists in the configuration. Procedure Create a new ingress gateway that is configured to use gateway injection. Note This procedure migrates away from the default ingress gateway deployment defined in the ServiceMeshControlPlane resource to gateway injection. The procedure may be modified to migrate from additional ingress gateways configured in the SMCP. 
Example ingress gateway resource with gateway injection apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway-canary namespace: istio-system 1 spec: selector: matchLabels: app: istio-ingressgateway istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: 2 app: istio-ingressgateway istio: ingressgateway sidecar.istio.io/inject: "true" spec: containers: - name: istio-proxy image: auto serviceAccountName: istio-ingressgateway --- apiVersion: v1 kind: ServiceAccount metadata: name: istio-ingressgateway namespace: istio-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: secret-reader namespace: istio-system rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-secret-reader namespace: istio-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: secret-reader subjects: - kind: ServiceAccount name: istio-ingressgateway --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy 3 metadata: name: gatewayingress namespace: istio-system spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress 1 The gateway injection deployment and all supporting resources should be deployed in the same namespace as the SMCP-defined gateway. 2 Ensure that the labels specified in the pod template include all of the label selectors specified in the Service object associated with the existing SMCP-defined gateway. 3 Grant access to the new gateway from outside the cluster. This access is required whenever the spec.security.manageNetworkPolicy of the ServiceMeshControlPlane resource is set to true , which is the default setting. Verify that the new gateway deployment is successfully handling requests. If access logging was configured in the ServiceMeshControlPlane resource, view the access logs of the new gateway deployment to confirm the behavior. Scale down the old deployment and scale up the new deployment. Gradually shift traffic from the old gateway deployment to the new gateway deployment by performing the following steps: Increase the number of replicas for the new gateway deployment by running the following command: USD oc scale -n istio-system deployment/<new_gateway_deployment> --replicas <new_number_of_replicas> Decrease the number of replicas for the old gateway deployment by running the following command: USD oc scale -n istio-system deployment/<old_gateway_deployment> --replicas <new_number_of_replicas> Repeat running the two commands. Each time, increase the number of replicas for the new gateway deployment and decrease the number of replicas for the old gateway deployment. Continue repeating until the new gateway deployment handles all traffic to the gateway Service object. Remove the app.kubernetes.io/managed-by label from the gateway Service object by running the following command: USD oc label service -n istio-system istio-ingressgateway app.kubernetes.io/managed-by- Removing the label prevents the service from being deleted when the gateway is disabled in the ServiceMeshControlPlane resource. Remove the ownerReferences object from the gateway Service object by running the following command: USD oc patch service -n istio-system istio-ingressgateway --type='json' -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]' Removing this object prevents the service from being garbage collected when the ServiceMeshControlPlane resource is deleted. 
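Optionally, confirm that the previous two steps took effect before disabling the old deployment. The following checks are a sketch that assumes the default istio-ingressgateway Service name in the istio-system namespace; the first command should show no app.kubernetes.io/managed-by label and the second should print no owner references:
oc get service -n istio-system istio-ingressgateway --show-labels
oc get service -n istio-system istio-ingressgateway -o jsonpath='{.metadata.ownerReferences}'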
Disable the old gateway deployment that was managed by the ServiceMeshControlPlane resource by running the following command: USD oc patch smcp -n istio-system <smcp_name> --type='json' -p='[{"op": "replace", "path": "/spec/gateways/ingress/enabled", "value": false}]' Note When the old ingress gateway Service object is disabled it is not deleted. You may save this Service object to a file and manage it alongside the new gateway injection resources. 2.15.3. Additional resources Enabling gateway injection Deploying automatic gateway injection 2.16. Route migration Automatic route creation, also known as Istio OpenShift Routing (IOR), is a deprecated feature that is disabled by default for any ServiceMeshControlPlane resource that was created using Red Hat OpenShift Service Mesh 2.5 and later. Migrating from IOR to explicitly-managed routes provides a more flexible way to manage and configure ingress gateways. When route resources are explicitly created they can be managed alongside the other gateway and application resources as part of a GitOps management model. 2.16.1. Migrating from Istio OpenShift Routing to explicitly-managed routes This procedure explains how to disable Istio OpenShift Routing (IOR) in Red Hat OpenShift Service Mesh, and how to continue to use and manage Routes that were originally created using IOR. This procedure also provides an example of how to explicitly create a new Route targeting an existing gateway Service object. Prerequisites Before migrating to explicitly-managed routes, export the existing route configurations managed by Istio OpenShift Routing (IOR) to files. Save the files so that in the future you can recreate the route configurations without requiring IOR. Procedure Modify the ServiceMeshControlPlane resource to disable IOR: apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: false You can continue to use the old routes that were previously created using IOR or you can create routes that explicitly target the ingress gateway Service object. The following example specifies how to create routes that explicitly target the ingress gateway Service object: kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-gateway namespace: istio-system 1 spec: host: www.example.com to: kind: Service name: istio-ingressgateway 2 weight: 100 port: targetPort: http2 wildcardPolicy: None 1 Specify new routes in the same namespace as the ingress gateway Service object. 2 Use the name of ingress gateway Service object that is the target. 2.16.2. Additional resources Creating an HTTP-based Route Understanding automatic routes 2.17. Metrics, logs, and traces Once you have added your application to the mesh, you can observe the data flow through your application. If you do not have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 2.17.1. Discovering console addresses Red Hat OpenShift Service Mesh provides the following consoles to view your service mesh data: Kiali console - Kiali is the management console for Red Hat OpenShift Service Mesh. Jaeger console - Jaeger is the management console for Red Hat OpenShift distributed tracing platform. Grafana console - Grafana provides mesh administrators with advanced query and metrics analysis and dashboards for Istio data. Optionally, Grafana can be used to analyze service mesh metrics. 
Prometheus console - Red Hat OpenShift Service Mesh uses Prometheus to store telemetry information from services. When you install the Service Mesh control plane, it automatically generates routes for each of the installed components. Once you have the route address, you can access the Kiali, Jaeger, Prometheus, or Grafana console to view and manage your service mesh data. Prerequisite The component must be enabled and installed. For example, if you did not install distributed tracing, you will not be able to access the Jaeger console. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the component console whose route you want to access. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Switch to the Service Mesh control plane project. In this example, istio-system is the Service Mesh control plane project. Run the following command: USD oc project istio-system To get the routes for the various Red Hat OpenShift Service Mesh consoles, run the following command: USD oc get routes This command returns the URLs for the Kiali, Jaeger, Prometheus, and Grafana web consoles, and any other routes in your service mesh. You should see output similar to the following: NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect Copy the URL for the console you want to access from the HOST/PORT column into a browser to open the console. Click Log In With OpenShift . 2.17.2. Accessing the Kiali console You can view your application's topology, health, and metrics in the Kiali console. If your service is experiencing problems, the Kiali console lets you view the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. Kiali also provides an interactive graph view of your namespace in real time. To access the Kiali console, you must have Red Hat OpenShift Service Mesh installed, and Kiali installed and configured. The installation process creates a route to access the Kiali console. If you know the URL for the Kiali console, you can access it directly. If you do not know the URL, use the following directions. Procedure for administrators Log in to the OpenShift Container Platform web console with an administrator role. Click Home Projects . On the Projects page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo .
On the Project details page, in the Launcher section, click the Kiali link. Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation and namespaces have not yet been added to the mesh, there might not be any data to display other than istio-system . Procedure for developers Log in to the OpenShift Container Platform web console with a developer role. Click Project . On the Project Details page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo . On the Project page, in the Launcher section, click the Kiali link. Click Log In With OpenShift . 2.17.3. Viewing service mesh data in the Kiali console The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology. There are several graphs to choose from: The App graph shows an aggregate workload for all applications that are labeled the same. The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high level view and aggregates all traffic for defined services. The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together. The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph. Graph nodes are decorated with a variety of information, pointing out various routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities. Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel. 2.17.3.1. Changing graph layouts in Kiali The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. Prerequisites If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. Procedure Launch the Kiali console. Click Log In With OpenShift . In the Kiali console, click Graph to view a namespace graph.
From the Namespace menu, select your application namespace, for example, bookinfo . To choose a different graph layout, do either or both of the following: Select different graph data groupings from the menu at the top of the graph. App graph Service graph Versioned App graph (default) Workload graph Select a different graph layout from the Legend at the bottom of the graph. Layout default dagre Layout 1 cose-bilkent Layout 2 cola 2.17.3.2. Viewing logs in the Kiali console You can view logs for your workloads in the Kiali console. The Workload Detail page includes a Logs tab which displays a unified view of both application and proxy logs. You can select how often you want the log display in Kiali to be refreshed. To change the logging level on the logs displayed in Kiali, you change the logging configuration for the workload or the proxy. Prerequisites Service Mesh installed and configured. Kiali installed and configured. The address for the Kiali console. Application or Bookinfo sample application added to the mesh. Procedure Launch the Kiali console. Click Log In With OpenShift . The Kiali Overview page displays namespaces that have been added to the mesh that you have permissions to view. Click Workloads . On the Workloads page, select the project from the Namespace menu. If necessary, use the filter to find the workload whose logs you want to view. Click the workload Name . For example, click ratings-v1 . On the Workload Details page, click the Logs tab to view the logs for the workload. Tip If you do not see any log entries, you may need to adjust either the Time Range or the Refresh interval. 2.17.3.3. Viewing metrics in the Kiali console You can view inbound and outbound metrics for your applications, workloads, and services in the Kiali console. The Detail pages include the following tabs: inbound Application metrics outbound Application metrics inbound Workload metrics outbound Workload metrics inbound Service metrics These tabs display predefined metrics dashboards, tailored to the relevant application, workload, or service level. The application and workload detail views show request and response metrics such as volume, duration, size, or TCP traffic. The service detail view shows request and response metrics for inbound traffic only. Kiali lets you customize the charts by choosing the charted dimensions. Kiali can also present metrics reported by either the source or the destination proxy. And for troubleshooting, Kiali can overlay trace spans on the metrics. Prerequisites Service Mesh installed and configured. Kiali installed and configured. The address for the Kiali console. (Optional) Distributed tracing installed and configured. Procedure Launch the Kiali console. Click Log In With OpenShift . The Kiali Overview page displays namespaces that have been added to the mesh that you have permissions to view. Click either Applications , Workloads , or Services . On the Applications , Workloads , or Services page, select the project from the Namespace menu. If necessary, use the filter to find the application, workload, or service whose metrics you want to view. Click the Name . On the Application Detail , Workload Details , or Service Details page, click either the Inbound Metrics or Outbound Metrics tab to view the metrics. 2.17.4. Distributed tracing Distributed tracing is the process of tracking the performance of individual services in an application by tracing the path of the service calls in the application.
Each time a user takes action in an application, a request is executed that might require many services to interact to produce a response. The path of this request is called a distributed transaction. Red Hat OpenShift Service Mesh uses Red Hat OpenShift distributed tracing platform to allow developers to view call flows in a microservice application. 2.17.4.1. Configuring the Red Hat OpenShift distributed tracing platform (Tempo) and the Red Hat build of OpenTelemetry You can expose tracing data to the Red Hat OpenShift distributed tracing platform (Tempo) by appending a named element and the opentelemetry provider to the spec.meshConfig.extensionProviders specification in the ServiceMeshControlPlane . Then, a telemetry custom resource configures Istio proxies to collect trace spans and send them to the OpenTelemetry Collector endpoint. You can create a Red Hat build of OpenTelemetry instance in a mesh namespace and configure it to send tracing data to a tracing platform backend service. Prerequisites You created a TempoStack instance using the Red Hat Tempo Operator in the tracing-system namespace. For more information, see "Installing Red Hat OpenShift distributed tracing platform (Tempo)". You installed the Red Hat build of OpenTelemetry Operator in either the recommended namespace or the openshift-operators namespace. For more information, see "Installing the Red Hat build of OpenTelemetry". If using Red Hat OpenShift Service Mesh 2.5 or earlier, set the spec.tracing.type parameter of the ServiceMeshControlPlane resource to None so tracing data can be sent to the OpenTelemetry Collector. Procedure Create an OpenTelemetry Collector instance in a mesh namespace. This example uses the bookinfo namespace: Example OpenTelemetry Collector configuration apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: bookinfo 1 spec: mode: deployment config: | receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: otlp: endpoint: "tempo-sample-distributor.tracing-system.svc.cluster.local:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] 1 Include the namespace in the ServiceMeshMemberRoll member list. 2 In this example, a TempoStack instance is running in the tracing-system namespace. You do not have to include the TempoStack namespace, such as`tracing-system`, in the ServiceMeshMemberRoll member list. Note Create a single instance of the OpenTelemetry Collector in one of the ServiceMeshMemberRoll member namespaces. You can add an otel-collector as a part of the mesh by adding sidecar.istio.io/inject: 'true' to the OpenTelemetryCollector resource. 
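For example, one way to request sidecar injection for the collector pod is through pod annotations on the OpenTelemetryCollector resource. The following snippet is a sketch only; it assumes that your version of the OpenTelemetryCollector custom resource supports the spec.podAnnotations field and reuses the otel collector from the previous example:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: bookinfo
spec:
  mode: deployment
  podAnnotations:
    sidecar.istio.io/inject: "true" # assumption: requests Istio sidecar injection for the collector pod
  # the config section stays the same as in the previous example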
Check the otel-collector pod log and verify that the pod is running: Example otel-collector pod log check USD oc logs -n bookinfo -l app.kubernetes.io/name=otel-collector Create or update an existing ServiceMeshControlPlane custom resource (CR) in the istio-system namespace: Example SMCP custom resource kind: ServiceMeshControlPlane apiVersion: maistra.io/v2 metadata: name: basic namespace: istio-system spec: addons: grafana: enabled: false kiali: enabled: true prometheus: enabled: true meshConfig: extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.bookinfo.svc.cluster.local policy: type: Istiod telemetry: type: Istiod version: v2.6 Note When upgrading from SMCP 2.5 to 2.6, set the spec.tracing.type parameter to None : Example SMCP spec.tracing.type parameter spec: tracing: type: None Create a Telemetry resource in the istio-system namespace: Example Telemetry resource apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100 Verify the istiod log. Configure the Kiali resource specification to enable a Kiali workload traces dashboard. You can use the dashboard to view tracing query results. Example Kiali resource apiVersion: kiali.io/v1alpha1 kind: Kiali # ... spec: external_services: tracing: query_timeout: 30 1 enabled: true in_cluster_url: 'http://tempo-sample-query-frontend.tracing-system.svc.cluster.local:16685' url: '[Tempo query frontend Route url]' use_grpc: true 2 1 The default query_timeout integer value is 30 seconds. If you set the value to greater than 30 seconds, you must update .spec.server.write_timeout in the Kiali CR and add the annotation haproxy.router.openshift.io/timeout=50s to the Kiali route. Both .spec.server.write_timeout and haproxy.router.openshift.io/timeout= must be greater than query_timeout . 2 If you are not using the default HTTP or gRPC port, replace the in_cluster_url: port with your custom port. Note Kiali 1.73 uses the Jaeger Query API, which causes a longer response time depending on Tempo resource limits. If you see a Could not fetch spans error message in the Kiali UI, then check your Tempo configuration or reduce the limit per query in Kiali. Send requests to your application. Verify the istiod pod logs and the otel-collector pod logs. 2.17.4.1.1. Configuring the OpenTelemetryCollector in a mTLS encrypted Service Mesh member namespace All traffic is TLS encrypted when you enable Service Mesh dataPlane mTLS encryption. To enable the mesh to communicate with the OpenTelemetryCollector service, disable the TLS trafficPolicy by applying a DestinationRule for the OpenTelemetryCollector service: Example DestinationRule Tempo CR apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: otel-disable-tls spec: host: "otel-collector.bookinfo.svc.cluster.local" trafficPolicy: tls: mode: DISABLE 2.17.4.1.2. Configuring the Red Hat OpenShift distributed tracing platform (Tempo) in a mTLS encrypted Service Mesh member namespace Note You don't need this additional DestinationRule configuration if you created a TempoStack instance in a namespace that is not a Service Mesh member namespace. All traffic is TLS encrypted when you enable Service Mesh dataPlane mTLS encryption and you create a TempoStack instance in a Service Mesh member namespace such as tracing-system-mtls . This encryption is not expected from the Tempo distributed service and returns a TLS error. 
To fix the TLS error, disable the TLS trafficPolicy by applying a DestinationRule for Tempo and Kiali: Example DestinationRule Tempo apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tempo namespace: tracing-system-mtls spec: host: "*.tracing-system-mtls.svc.cluster.local" trafficPolicy: tls: mode: DISABLE Example DestinationRule Kiali apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali.istio-system.svc.cluster.local trafficPolicy: tls: mode: DISABLE 2.17.4.2. Connecting an existing distributed tracing Jaeger instance If you already have an existing Red Hat OpenShift distributed tracing platform (Jaeger) instance in OpenShift Container Platform, you can configure your ServiceMeshControlPlane resource to use that instance for distributed tracing platform. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Prerequisites Red Hat OpenShift distributed tracing platform instance installed and configured. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane resource, for example basic . Add the name of your distributed tracing platform (Jaeger) instance to the ServiceMeshControlPlane . Click the YAML tab. Add the name of your distributed tracing platform (Jaeger) instance to spec.addons.jaeger.name in your ServiceMeshControlPlane resource. In the following example, distr-tracing-production is the name of the distributed tracing platform (Jaeger) instance. Example distributed tracing configuration spec: addons: jaeger: name: distr-tracing-production Click Save . Click Reload to verify the ServiceMeshControlPlane resource was configured correctly. 2.17.4.3. Adjusting the sampling rate A trace is an execution path between services in the service mesh. A trace is comprised of one or more spans. A span is a logical unit of work that has a name, start time, and duration. The sampling rate determines how often a trace is persisted. The Envoy proxy sampling rate is set to sample 100% of traces in your service mesh by default. A high sampling rate consumes cluster resources and performance but is useful when debugging issues. Before you deploy Red Hat OpenShift Service Mesh in production, set the value to a smaller proportion of traces. For example, set spec.tracing.sampling to 100 to sample 1% of traces. Configure the Envoy proxy sampling rate as a scaled integer representing 0.01% increments. In a basic installation, spec.tracing.sampling is set to 10000 , which samples 100% of traces. For example: Setting the value to 10 samples 0.1% of traces. Setting the value to 500 samples 5% of traces. Note The Envoy proxy sampling rate applies for applications that are available to a Service Mesh, and use the Envoy proxy. 
This sampling rate determines how much data the Envoy proxy collects and tracks. The Jaeger remote sampling rate applies to applications that are external to the Service Mesh, and do not use the Envoy proxy, such as a database. This sampling rate determines how much data the distributed tracing system collects and stores. For more information, see Distributed tracing configuration options . Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Click the Project menu and select the project where you installed the control plane, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane resource, for example basic . To adjust the sampling rate, set a different value for spec.tracing.sampling . Click the YAML tab. Set the value for spec.tracing.sampling in your ServiceMeshControlPlane resource. In the following example, set it to 100 . Jaeger sampling example spec: tracing: sampling: 100 Click Save . Click Reload to verify the ServiceMeshControlPlane resource was configured correctly. 2.17.5. Accessing the Jaeger console To access the Jaeger console you must have Red Hat OpenShift Service Mesh installed, Red Hat OpenShift distributed tracing platform (Jaeger) installed and configured. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator have been deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from Kiali console Launch the Kiali console. Click Distributed Tracing in the left navigation pane. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, istio-system is the Service Mesh control plane namespace. USD oc get route -n istio-system jaeger -o jsonpath='{.spec.host}' Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. 
If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. For more information about configuring Jaeger, see the distributed tracing documentation . 2.17.6. Accessing the Grafana console Grafana is an analytics tool you can use to view, query, and analyze your service mesh metrics. In this example, istio-system is the Service Mesh control plane namespace. To access Grafana, do the following: Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Routes . Click the link in the Location column for the Grafana row. Log in to the Grafana console with your OpenShift Container Platform credentials. 2.17.7. Accessing the Prometheus console Prometheus is a monitoring and alerting tool that you can use to collect multi-dimensional data about your microservices. In this example, istio-system is the Service Mesh control plane namespace. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Routes . Click the link in the Location column for the Prometheus row. Log in to the Prometheus console with your OpenShift Container Platform credentials. 2.17.8. Integrating with user-workload monitoring By default, Red Hat OpenShift Service Mesh (OSSM) installs the Service Mesh control plane (SMCP) with a dedicated instance of Prometheus for collecting metrics from a mesh. However, production systems need more advanced monitoring systems, like OpenShift Container Platform monitoring for user-defined projects. The following steps show how to integrate Service Mesh with user-workload monitoring. Prerequisites User-workload monitoring is enabled. Red Hat OpenShift Service Mesh Operator 2.4 is installed. Kiali Operator 1.65 is installed. 
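If user-workload monitoring is not yet enabled on the cluster, it is typically turned on through the cluster monitoring ConfigMap. The following snippet is a minimal sketch; see "Enabling monitoring for user-defined projects" in the Additional resources for the complete procedure:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true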
Procedure Grant the cluster-monitoring-view role to the Kiali Service Account: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiali-monitoring-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view subjects: - kind: ServiceAccount name: kiali-service-account namespace: istio-system Configure Kiali for user-workload monitoring: apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: prometheus: auth: type: bearer use_kiali_token: true query_scope: mesh_id: "basic-istio-system" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 If you use Istio Operator 2.4, use this configuration to configure Kiali for user-workload monitoring: apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: istio: config_map_name: istio-<smcp-name> istio_sidecar_injector_config_map_name: istio-sidecar-injector-<smcp-name> istiod_deployment_name: istiod-<smcp-name> url_service_version: 'http://istiod-<smcp-name>.istio-system:15014/version' prometheus: auth: token: secret:thanos-querier-web-token:token type: bearer use_kiali_token: false query_scope: mesh_id: "basic-istio-system" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65 Configure the SMCP for external Prometheus: apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: addons: prometheus: enabled: false 1 grafana: enabled: false 2 kiali: name: kiali-user-workload-monitoring meshConfig: extensionProviders: - name: prometheus prometheus: {} 1 Disable the default Prometheus instance provided by OSSM. 2 Disable Grafana. It is not supported with an external Prometheus instance. Apply a custom network policy to allow ingress traffic from the monitoring namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: user-workload-access namespace: istio-system 1 spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress 1 The custom network policy must be applied to all namespaces. Apply a Telemetry object to enable traffic metrics in Istio proxies: apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics namespace: istio-system 1 spec: selector: 2 matchLabels: app: bookinfo metrics: - providers: - name: prometheus 1 A Telemetry object created in the control plane namespace applies to all workloads in a mesh. To apply telemetry to only one namespace, create the object in the target namespace. 2 Optional: Setting the selector.matchLabels spec applies the Telemetry object to specific workloads in the target namespace. Apply a ServiceMonitor object to monitor the Istio control plane: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system 1 spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s relabelings: - action: replace replacement: "basic-istio-system" 2 targetLabel: mesh_id 1 Create this ServiceMonitor object in the Istio control plane namespace because it monitors the Istiod service. In this example, the namespace is istio-system . 
2 The string "basic-istio-system" is a combination of the SMCP name and its namespace, but any label can be used as long as it is unique for every mesh using user workload monitoring in the cluster. The spec.prometheus.query_scope of the Kiali resource configured in Step 2 needs to match this value. Note If there is only one mesh using user-workload monitoring, then both the mesh_id relabeling and the spec.prometheus.query_scope field in the Kiali resource are optional (but the query_scope field given here should be removed if the mesh_id label is removed). If multiple mesh instances on the cluster might use user-workload monitoring, then both the mesh_id relabelings and the spec.prometheus.query_scope field in the Kiali resource are required. This ensures that Kiali only sees metrics from its associated mesh. If you are not deploying Kiali, you can still apply mesh_id relabeling so that metrics from different meshes can be distinguished from one another. Apply a PodMonitor object to collect metrics from Istio proxies: apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system 1 spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [__meta_kubernetes_pod_container_name] regex: "istio-proxy" - action: keep sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape] - action: replace regex: (\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: '[USD2]:USD1' sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: replace regex: (\d+);((([0-9]+?)(\.|USD)){4}) replacement: USD2:USD1 sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: labeldrop regex: "__meta_kubernetes_pod_label_(.+)" - sourceLabels: [__meta_kubernetes_namespace] action: replace targetLabel: namespace - sourceLabels: [__meta_kubernetes_pod_name] action: replace targetLabel: pod_name - action: replace replacement: "basic-istio-system" 2 targetLabel: mesh_id 1 Since OpenShift Container Platform monitoring ignores the namespaceSelector spec in ServiceMonitor and PodMonitor objects, you must apply the PodMonitor object in all mesh namespaces, including the control plane namespace. 2 The string "basic-istio-system" is a combination of the SMCP name and its namespace, but any label can be used as long as it is unique for every mesh using user workload monitoring in the cluster. The spec.prometheus.query_scope of the Kiali resource configured in Step 2 needs to match this value. Note If there is only one mesh using user-workload monitoring, then both the mesh_id relabeling and the spec.prometheus.query_scope field in the Kiali resource are optional (but the query_scope field given here should be removed if the mesh_id label is removed). If multiple mesh instances on the cluster might use user-workload monitoring, then both the mesh_id relabelings and the spec.prometheus.query_scope field in the Kiali resource are required. This ensures that Kiali only sees metrics from its associated mesh. If you are not deploying Kiali, you can still apply mesh_id relabeling so that metrics from different meshes can be distinguished from one another. Open the OpenShift Container Platform web console, and check that metrics are visible. 2.17.9. 
Additional resources Enabling monitoring for user-defined projects Installing the distributed tracing platform (Tempo) Installing the Red Hat build of OpenTelemetry 2.18. Performance and scalability The default ServiceMeshControlPlane settings are not intended for production use; they are designed to install successfully on a default OpenShift Container Platform installation, which is a resource-limited environment. After you have verified a successful SMCP installation, you should modify the settings defined within the SMCP to suit your environment. 2.18.1. Setting limits on compute resources By default, spec.proxy has the settings cpu: 10m and memory: 128M . If you are using Pilot, spec.runtime.components.pilot has the same default values. The settings in the following example are based on 1,000 services and 1,000 requests per second. You can change the values for cpu and memory in the ServiceMeshControlPlane . Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane , for example basic . Add the name of your standalone Jaeger instance to the ServiceMeshControlPlane . Click the YAML tab. Set the values for spec.proxy.runtime.container.resources.requests.cpu , spec.proxy.runtime.container.resources.requests.memory , components.kiali.container , and components.global.oauthproxy in your ServiceMeshControlPlane resource. Example version 2.6 ServiceMeshControlPlane apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} kiali: container: resources: limits: cpu: "90m" memory: "245Mi" requests: cpu: "30m" memory: "108Mi" global.oauthproxy: container: resources: requests: cpu: "101m" memory: "256Mi" limits: cpu: "201m" memory: "512Mi" To set values for Red Hat OpenShift distributed tracing platform (Jaeger), see "Configuring and deploying the distributed tracing platform Jaeger". Click Save . Verification Click Reload to verify that the ServiceMeshControlPlane resource was configured correctly. Additional resources Configuring and deploying the distributed tracing platform Jaeger . 2.18.2. Load test results The upstream Istio community load tests mesh consists of 1000 services and 2000 sidecars with 70,000 mesh-wide requests per second. Running the tests using Istio 1.12.3, generated the following results: The Envoy proxy uses 0.35 vCPU and 40 MB memory per 1000 requests per second going through the proxy. Istiod uses 1 vCPU and 1.5 GB of memory. The Envoy proxy adds 2.65 ms to the 90th percentile latency. The legacy istio-telemetry service (disabled by default in Service Mesh 2.0) uses 0.6 vCPU per 1000 mesh-wide requests per second for deployments that use Mixer. The data plane components, the Envoy proxies, handle data flowing through the system. The Service Mesh control plane component, Istiod, configures the data plane. The data plane and control plane have distinct performance concerns. 2.18.2.1. 
Service Mesh Control plane performance Istiod configures sidecar proxies based on user authored configuration files and the current state of the system. In a Kubernetes environment, Custom Resource Definitions (CRDs) and deployments constitute the configuration and state of the system. The Istio configuration objects like gateways and virtual services, provide the user-authored configuration. To produce the configuration for the proxies, Istiod processes the combined configuration and system state from the Kubernetes environment and the user-authored configuration. The Service Mesh control plane supports thousands of services, spread across thousands of pods with a similar number of user authored virtual services and other configuration objects. Istiod's CPU and memory requirements scale with the number of configurations and possible system states. The CPU consumption scales with the following factors: The rate of deployment changes. The rate of configuration changes. The number of proxies connecting to Istiod. However this part is inherently horizontally scalable. 2.18.2.2. Data plane performance Data plane performance depends on many factors, for example: Number of client connections Target request rate Request size and response size Number of proxy worker threads Protocol CPU cores Number and types of proxy filters, specifically telemetry v2 related filters. The latency, throughput, and the proxies' CPU and memory consumption are measured as a function of these factors. 2.18.2.2.1. CPU and memory consumption Since the sidecar proxy performs additional work on the data path, it consumes CPU and memory. As of Istio 1.12.3, a proxy consumes about 0.5 vCPU per 1000 requests per second. The memory consumption of the proxy depends on the total configuration state the proxy holds. A large number of listeners, clusters, and routes can increase memory usage. Since the proxy normally doesn't buffer the data passing through, request rate doesn't affect the memory consumption. 2.18.2.2.2. Additional latency Since Istio injects a sidecar proxy on the data path, latency is an important consideration. Istio adds an authentication filter, a telemetry filter, and a metadata exchange filter to the proxy. Every additional filter adds to the path length inside the proxy and affects latency. The Envoy proxy collects raw telemetry data after a response is sent to the client. The time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request. However, since the worker is busy handling the request, the worker won't start handling the request immediately. This process adds to the queue wait time of the request and affects average and tail latencies. The actual tail latency depends on the traffic pattern. Inside the mesh, a request traverses the client-side proxy and then the server-side proxy. In the default configuration of Istio 1.12.3 (that is, Istio with telemetry v2), the two proxies add about 1.7 ms and 2.7 ms to the 90th and 99th percentile latency, respectively, over the baseline data plane latency. 2.19. Configuring Service Mesh for production When you are ready to move from a basic installation to production, you must configure your control plane, tracing, and security certificates to meet production requirements. Prerequisites Install and configure Red Hat OpenShift Service Mesh. Test your configuration in a staging environment. 2.19.1. 
Configuring your ServiceMeshControlPlane resource for production If you have installed a basic ServiceMeshControlPlane resource to test Service Mesh, you must configure it to production specification before you use Red Hat OpenShift Service Mesh in production. You cannot change the metadata.name field of an existing ServiceMeshControlPlane resource. For production deployments, you must customize the default template. Procedure Configure the distributed tracing platform (Jaeger) for production. Edit the ServiceMeshControlPlane resource to use the production deployment strategy, by setting spec.addons.jaeger.install.storage.type to Elasticsearch and specify additional configuration options under install . You can create and configure your Jaeger instance and set spec.addons.jaeger.name to the name of the Jaeger instance. Default Jaeger parameters including Elasticsearch apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {} Configure the sampling rate for production. For more information, see the Performance and scalability section. Ensure your security certificates are production ready by installing security certificates from an external certificate authority. For more information, see the Security section. Verification Enter the following command to verify that the ServiceMeshControlPlane resource updated properly. In this example, basic is the name of the ServiceMeshControlPlane resource. USD oc get smcp basic -o yaml 2.19.2. Additional resources For more information about tuning Service Mesh for performance, see Performance and scalability . 2.20. Connecting service meshes Federation is a deployment model that lets you share services and workloads between separate meshes managed in distinct administrative domains. 2.20.1. Federation overview Federation is a set of features that let you connect services between separate meshes, allowing the use of Service Mesh features such as authentication, authorization, and traffic management across multiple, distinct administrative domains. Implementing a federated mesh lets you run, manage, and observe a single service mesh running across multiple OpenShift clusters. Red Hat OpenShift Service Mesh federation takes an opinionated approach to a multi-cluster implementation of Service Mesh that assumes minimal trust between meshes. Service Mesh federation assumes that each mesh is managed individually and retains its own administrator. The default behavior is that no communication is permitted and no information is shared between meshes. The sharing of information between meshes is on an explicit opt-in basis. Nothing is shared in a federated mesh unless it has been configured for sharing. Support functions such as certificate generation, metrics and trace collection remain local in their respective meshes. You configure the ServiceMeshControlPlane on each service mesh to create ingress and egress gateways specifically for the federation, and to specify the trust domain for the mesh. Federation also involves the creation of additional federation files. The following resources are used to configure the federation between two or more meshes. A ServiceMeshPeer resource declares the federation between a pair of service meshes. 
An ExportedServiceSet resource declares that one or more services from the mesh are available for use by a peer mesh. An ImportedServiceSet resource declares which services exported by a peer mesh will be imported into the mesh. 2.20.2. Federation features Features of the Red Hat OpenShift Service Mesh federated approach to joining meshes include the following: Supports common root certificates for each mesh. Supports different root certificates for each mesh. Mesh administrators must manually configure certificate chains, service discovery endpoints, trust domains, etc for meshes outside of the Federated mesh. Only export/import the services that you want to share between meshes. Defaults to not sharing information about deployed workloads with other meshes in the federation. A service can be exported to make it visible to other meshes and allow requests from workloads outside of its own mesh. A service that has been exported can be imported to another mesh, enabling workloads on that mesh to send requests to the imported service. Encrypts communication between meshes at all times. Supports configuring load balancing across workloads deployed locally and workloads that are deployed in another mesh in the federation. When a mesh is joined to another mesh it can do the following: Provide trust details about itself to the federated mesh. Discover trust details about the federated mesh. Provide information to the federated mesh about its own exported services. Discover information about services exported by the federated mesh. 2.20.3. Federation security Red Hat OpenShift Service Mesh federation takes an opinionated approach to a multi-cluster implementation of Service Mesh that assumes minimal trust between meshes. Data security is built in as part of the federation features. Each mesh is considered to be a unique tenant, with a unique administration. You create a unique trust domain for each mesh in the federation. Traffic between the federated meshes is automatically encrypted using mutual Transport Layer Security (mTLS). The Kiali graph only displays your mesh and services that you have imported. You cannot see the other mesh or services that have not been imported into your mesh. 2.20.4. Federation limitations The Red Hat OpenShift Service Mesh federated approach to joining meshes has the following limitations: Federation of meshes is not supported on OpenShift Dedicated. 2.20.5. Federation prerequisites The Red Hat OpenShift Service Mesh federated approach to joining meshes has the following prerequisites: Two or more OpenShift Container Platform 4.6 or above clusters. Federation was introduced in Red Hat OpenShift Service Mesh 2.1 or later. You must have the Red Hat OpenShift Service Mesh 2.1 or later Operator installed on each mesh that you want to federate. You must have a version 2.1 or later ServiceMeshControlPlane deployed on each mesh that you want to federate. You must configure the load balancers supporting the services associated with the federation gateways to support raw TLS traffic. Federation traffic consists of HTTPS for discovery and raw encrypted TCP for service traffic. Services that you want to expose to another mesh should be deployed before you can export and import them. However, this is not a strict requirement. You can specify service names that do not yet exist for export/import. When you deploy the services named in the ExportedServiceSet and ImportedServiceSet they will be automatically made available for export/import. 2.20.6. 
Planning your mesh federation Before you start configuring your mesh federation, you should take some time to plan your implementation. How many meshes do you plan to join in a federation? You probably want to start with a limited number of meshes, perhaps two or three. What naming convention do you plan to use for each mesh? Having a pre-defined naming convention will help with configuration and troubleshooting. The examples in this documentation use different colors for each mesh. You should decide on a naming convention that will help you determine who owns and manages each mesh, as well as the following federation resources: Cluster names Cluster network names Mesh names and namespaces Federation ingress gateways Federation egress gateways Security trust domains Note Each mesh in the federation must have its own unique trust domain. Which services from each mesh do you plan to export to the federated mesh? Each service can be exported individually, or you can specify labels or use wildcards. Do you want to use aliases for the service namespaces? Do you want to use aliases for the exported services? Which exported services does each mesh plan to import? Each mesh only imports the services that it needs. Do you want to use aliases for the imported services? 2.20.7. Mesh federation across clusters To connect one instance of the OpenShift Service Mesh with one running in a different cluster, the procedure is not much different from connecting two meshes deployed in the same cluster. However, the ingress gateway of one mesh must be reachable from the other mesh. One way of ensuring this is to configure the gateway service as a LoadBalancer service if the cluster supports this type of service. The service must be exposed through a load balancer that operates at Layer 4 of the OSI model. 2.20.7.1. Exposing the federation ingress on clusters running on bare metal If the cluster runs on bare metal and fully supports LoadBalancer services, the IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. If the cluster does not support LoadBalancer services, using a NodePort service could be an option if the nodes are accessible from the cluster running the other mesh. In the ServiceMeshPeer object, specify the IP addresses of the nodes in the .spec.remote.addresses field and the service's node ports in the .spec.remote.discoveryPort and .spec.remote.servicePort fields. 2.20.7.2. Exposing the federation ingress on clusters running on IBM Power and IBM Z If the cluster runs on IBM Power(R) or IBM Z(R) infrastructure and fully supports LoadBalancer services, the IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. If the cluster does not support LoadBalancer services, using a NodePort service could be an option if the nodes are accessible from the cluster running the other mesh. In the ServiceMeshPeer object, specify the IP addresses of the nodes in the .spec.remote.addresses field and the service's node ports in the .spec.remote.discoveryPort and .spec.remote.servicePort fields. 2.20.7.3. Exposing the federation ingress on Amazon Web Services (AWS) By default, LoadBalancer services in clusters running on AWS do not support L4 load balancing.
In order for Red Hat OpenShift Service Mesh federation to operate correctly, the following annotation must be added to the ingress gateway service: service.beta.kubernetes.io/aws-load-balancer-type: nlb The Fully Qualified Domain Name found in the .status.loadBalancer.ingress.hostname field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. 2.20.7.4. Exposing the federation ingress on Azure On Microsoft Azure, merely setting the service type to LoadBalancer suffices for mesh federation to operate correctly. The IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. 2.20.7.5. Exposing the federation ingress on Google Cloud Platform (GCP) On Google Cloud Platform, merely setting the service type to LoadBalancer suffices for mesh federation to operate correctly. The IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. 2.20.8. Federation implementation checklist Federating services meshes involves the following activities: ❏ Configure networking between the clusters that you are going to federate. ❏ Configure the load balancers supporting the services associated with the federation gateways to support raw TLS traffic. ❏ Installing the Red Hat OpenShift Service Mesh version 2.1 or later Operator in each of your clusters. ❏ Deploying a version 2.1 or later ServiceMeshControlPlane to each of your clusters. ❏ Configuring the SMCP for federation for each mesh that you want to federate: ❏ Create a federation egress gateway for each mesh you are going to federate with. ❏ Create a federation ingress gateway for each mesh you are going to federate with. ❏ Configure a unique trust domain. ❏ Federate two or more meshes by creating a ServiceMeshPeer resource for each mesh pair. ❏ Export services by creating an ExportedServiceSet resource to make services available from one mesh to a peer mesh. ❏ Import services by creating an ImportedServiceSet resource to import services shared by a mesh peer. 2.20.9. Configuring a Service Mesh control plane for federation Before a mesh can be federated, you must configure the ServiceMeshControlPlane for mesh federation. Because all meshes that are members of the federation are equal, and each mesh is managed independently, you must configure the SMCP for each mesh that will participate in the federation. In the following example, the administrator for the red-mesh is configuring the SMCP for federation with both the green-mesh and the blue-mesh . 
Sample SMCP for red-mesh apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.6 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local Table 2.6. ServiceMeshControlPlane federation configuration parameters Parameter Description Values Default value Name of the cluster. You are not required to specify a cluster name, but it is helpful for troubleshooting. String N/A Name of the cluster network. You are not required to specify a name for the network, but it is helpful for configuration and troubleshooting. String N/A 2.20.9.1. Understanding federation gateways You use a gateway to manage inbound and outbound traffic for your mesh, letting you specify which traffic you want to enter or leave the mesh. You use ingress and egress gateways to manage traffic entering and leaving the service mesh (North-South traffic). When you create a federated mesh, you create additional ingress/egress gateways, to facilitate service discovery between federated meshes, communication between federated meshes, and to manage traffic flow between service meshes (East-West traffic). To avoid naming conflicts between meshes, you must create separate egress and ingress gateways for each mesh. For example, red-mesh would have separate egress gateways for traffic going to green-mesh and blue-mesh . Table 2.7. Federation gateway parameters Parameter Description Values Default value Define an additional egress gateway for each mesh peer in the federation. This parameter enables or disables the federation egress. true / false true Networks associated with exported services. Set to the value of spec.cluster.network in the SMCP for the mesh, otherwise use <ServiceMeshPeer-name>-network. For example, if the ServiceMeshPeer resource for that mesh is named west , then the network would be named west-network . Specify a unique label for the gateway to prevent federated traffic from flowing through the cluster's default system gateways. Used to specify the port: and name: used for TLS and service discovery. Federation traffic consists of raw encrypted TCP for service traffic. Port 15443 is required for sending TLS service requests to other meshes in the federation. Port 8188 is required for sending service discovery requests to other meshes in the federation. Define an additional ingress gateway for each mesh peer in the federation. This parameter enables or disables the federation ingress.
true / false true The ingress gateway service must be exposed through a load balancer that operates at Layer 4 of the OSI model and is publicly available. LoadBalancer If the cluster does not support LoadBalancer services, the ingress gateway service can be exposed through a NodePort service. NodePort Specify a unique label for the gateway to prevent federated traffic from flowing through the cluster's default system gateways. Used to specify the port: and name: used for TLS and service discovery. Federation traffic consists of raw encrypted TCP for service traffic. Federation traffic consists of HTTPS for discovery. Port 15443 is required for receiving TLS service requests to other meshes in the federation. Port 8188 is required for receiving service discovery requests to other meshes in the federation. Used to specify the nodePort: if the cluster does not support LoadBalancer services. If specified, is required in addition to port: and name: for both TLS and service discovery. nodePort: must be in the range 30000 - 32767 . In the following example, the administrator is configuring the SMCP for federation with the green-mesh using a NodePort service. Sample SMCP for NodePort apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: green-mesh namespace: green-mesh-system spec: # ... gateways: additionalIngress: ingress-green-mesh: enabled: true service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery 2.20.9.2. Understanding federation trust domain parameters Each mesh in the federation must have its own unique trust domain. This value is used when configuring mesh federation in the ServiceMeshPeer resource. kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local Table 2.8. Federation security parameters Parameter Description Values Default value Used to specify a unique name for the trust domain for the mesh. Domains must be unique for every mesh in the federation. <mesh-name>.local N/A Procedure from the Console Follow this procedure to edit the ServiceMeshControlPlane with the OpenShift Container Platform web console. This example uses the red-mesh as an example. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Project menu and select the project where you installed the Service Mesh control plane. For example, red-mesh-system . Click the Red Hat OpenShift Service Mesh Operator. On the Istio Service Mesh Control Plane tab, click the name of your ServiceMeshControlPlane , for example red-mesh . On the Create ServiceMeshControlPlane Details page, click YAML to modify your configuration. Modify your ServiceMeshControlPlane to add federation ingress and egress gateways and to specify the trust domain. Click Save . Procedure from the CLI Follow this procedure to create or edit the ServiceMeshControlPlane with the command line. This example uses the red-mesh as an example. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane, for example red-mesh-system. 
USD oc project red-mesh-system Edit the ServiceMeshControlPlane file to add federation ingress and egress gateways and to specify the trust domain. Run the following command to edit the Service Mesh control plane where red-mesh-system is the system namespace and red-mesh is the name of the ServiceMeshControlPlane object: USD oc edit -n red-mesh-system smcp red-mesh Enter the following command, where red-mesh-system is the system namespace, to see the status of the Service Mesh control plane installation. USD oc get smcp -n red-mesh-system The installation has finished successfully when the READY column indicates that all components are ready. 2.20.10. Joining a federated mesh You declare the federation between two meshes by creating a ServiceMeshPeer resource. The ServiceMeshPeer resource defines the federation between two meshes, and you use it to configure discovery for the peer mesh, access to the peer mesh, and certificates used to validate the other mesh's clients. Meshes are federated on a one-to-one basis, so each pair of peers requires a pair of ServiceMeshPeer resources specifying the federation connection to the other service mesh. For example, federating two meshes named red and green would require two ServiceMeshPeer files. On red-mesh-system, create a ServiceMeshPeer for the green mesh. On green-mesh-system, create a ServiceMeshPeer for the red mesh. Federating three meshes named red , blue , and green would require six ServiceMeshPeer files. On red-mesh-system, create a ServiceMeshPeer for the green mesh. On red-mesh-system, create a ServiceMeshPeer for the blue mesh. On green-mesh-system, create a ServiceMeshPeer for the red mesh. On green-mesh-system, create a ServiceMeshPeer for the blue mesh. On blue-mesh-system, create a ServiceMeshPeer for the red mesh. On blue-mesh-system, create a ServiceMeshPeer for the green mesh. Configuration in the ServiceMeshPeer resource includes the following: The address of the other mesh's ingress gateway, which is used for discovery and service requests. The names of the local ingress and egress gateways that is used for interactions with the specified peer mesh. The client ID used by the other mesh when sending requests to this mesh. The trust domain used by the other mesh. The name of a ConfigMap containing a root certificate that is used to validate client certificates in the trust domain used by the other mesh. In the following example, the administrator for the red-mesh is configuring federation with the green-mesh . Example ServiceMeshPeer resource for red-mesh kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert Table 2.9. ServiceMeshPeer configuration parameters Parameter Description Values Name of the peer mesh that this resource is configuring federation with. String System namespace for this mesh, that is, where the Service Mesh control plane is installed. String List of public addresses of the peer meshes' ingress gateways that are servicing requests from this mesh. The port on which the addresses are handling discovery requests. Defaults to 8188 The port on which the addresses are handling service requests. 
Defaults to 15443 Name of the ingress on this mesh that is servicing requests received from the peer mesh. For example, ingress-green-mesh . Name of the egress on this mesh that is servicing requests sent to the peer mesh. For example, egress-green-mesh . The trust domain used by the peer mesh. <peerMeshName>.local The client ID used by the peer mesh when calling into this mesh. <peerMeshTrustDomain>/ns/<peerMeshSystem>/sa/<peerMeshEgressGatewayName>-service-account The kind (for example, ConfigMap) and name of a resource containing the root certificate used to validate the client and server certificate(s) presented to this mesh by the peer mesh. The key of the config map entry containing the certificate should be root-cert.pem . kind: ConfigMap name: <peerMesh>-ca-root-cert 2.20.10.1. Creating a ServiceMeshPeer resource Prerequisites Two or more OpenShift Container Platform 4.6 or above clusters. The clusters must already be networked. The load balancers supporting the services associated with the federation gateways must be configured to support raw TLS traffic. Each cluster must have a version 2.1 or later ServiceMeshControlPlane configured to support federation deployed. An account with the cluster-admin role. Procedure from the CLI Follow this procedure to create a ServiceMeshPeer resource from the command line. This example shows the red-mesh creating a peer resource for the green-mesh . Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the control plane, for example, red-mesh-system . USD oc project red-mesh-system Create a ServiceMeshPeer file based the following example for the two meshes that you want to federate. Example ServiceMeshPeer resource for red-mesh to green-mesh kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert Run the following command to deploy the resource, where red-mesh-system is the system namespace and servicemeshpeer.yaml includes a full path to the file you edited: USD oc create -n red-mesh-system -f servicemeshpeer.yaml To confirm that connection between the red mesh and green mesh is established, inspect the status of the green-mesh ServiceMeshPeer in the red-mesh-system namespace: USD oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml Example ServiceMeshPeer connection between red-mesh and green-mesh status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: "2021-10-05T13:02:25Z" lastFullSync: "2021-10-05T13:02:25Z" source: 10.128.2.149 watch: connected: true lastConnected: "2021-10-05T13:02:55Z" lastDisconnectStatus: 503 Service Unavailable lastFullSync: "2021-10-05T13:05:43Z" The status.discoveryStatus.active.remotes field shows that istiod in the peer mesh (in this example, the green mesh) is connected to istiod in the current mesh (in this example, the red mesh). 
The status.discoveryStatus.active.watch field shows that istiod in the current mesh is connected to istiod in the peer mesh. If you check the servicemeshpeer named red-mesh in green-mesh-system , you'll find information about the same two connections from the perspective of the green mesh. When the connection between two meshes is not established, the ServiceMeshPeer status indicates this in the status.discoveryStatus.inactive field. For more information on why a connection attempt failed, inspect the Istiod log, the access log of the egress gateway handling egress traffic for the peer, and the ingress gateway handling ingress traffic for the current mesh in the peer mesh. For example, if the red mesh can't connect to the green mesh, check the following logs: istiod-red-mesh in red-mesh-system egress-green-mesh in red-mesh-system ingress-red-mesh in green-mesh-system 2.20.11. Exporting a service from a federated mesh Exporting services allows a mesh to share one or more of its services with another member of the federated mesh. You use an ExportedServiceSet resource to declare the services from one mesh that you are making available to another peer in the federated mesh. You must explicitly declare each service to be shared with a peer. You can select services by namespace or name. You can use wildcards to select services; for example, to export all the services in a namespace. You can export services using an alias. For example, you can export the foo/bar service as custom-ns/bar . You can only export services that are visible to the mesh's system namespace. For example, a service in another namespace with a networking.istio.io/exportTo label set to '.' would not be a candidate for export. For exported services, their target services will only see traffic from the ingress gateway, not the original requestor (that is, they won't see the client ID of either the other mesh's egress gateway or the workload originating the request) The following example is for services that red-mesh is exporting to green-mesh . Example ExportedServiceSet resource kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: "true" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: "*" name: "*" alias: namespace: bookinfo Table 2.10. ExportedServiceSet parameters Parameter Description Values Name of the ServiceMeshPeer you are exposing this service to. Must match the name value for the mesh in the ServiceMeshPeer resource. Name of the project/namespace containing this resource (should be the system namespace for the mesh) . Type of rule that will govern the export for this service. The first matching rule found for the service will be used for the export. NameSelector , LabelSelector To create a NameSelector rule, specify the namespace of the service and the name of the service as defined in the Service resource. To create a NameSelector rule that uses an alias for the service, after specifying the namespace and name for the service, then specify the alias for the namespace and the alias to be used for name of the service. 
To create a LabelSelector rule, specify the namespace of the service and specify the label defined in the Service resource. In the example above, the label is export-service . To create a LabelSelector rule that uses aliases for the services, after specifying the selector , specify the aliases to be used for name or namespace of the service. In the example above, the namespace alias is bookinfo for all matching services. Export services with the name "ratings" from all namespaces in the red-mesh to blue-mesh. kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: "*" name: ratings Export all services from the west-data-center namespace to green-mesh kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: "*" 2.20.11.1. Creating an ExportedServiceSet You create an ExportedServiceSet resource to explicitly declare the services that you want to be available to a mesh peer. Services are exported as <export-name>.<export-namespace>.svc.<ServiceMeshPeer.name>-exports.local and will automatically route to the target service. This is the name by which the exported service is known in the exporting mesh. When the ingress gateway receives a request destined for this name, it will be routed to the actual service being exported. For example, if a service named ratings.red-mesh-bookinfo is exported to green-mesh as ratings.bookinfo , the service will be exported under the name ratings.bookinfo.svc.green-mesh-exports.local , and traffic received by the ingress gateway for that hostname will be routed to the ratings.red-mesh-bookinfo service. Note When you set the importAsLocal parameter to true to aggregate the remote endpoint with local services, you must use an alias for the service. When you set the parameter false , no alias is required. Prerequisites The cluster and ServiceMeshControlPlane have been configured for mesh federation. An account with the cluster-admin role. Note You can configure services for export even if they don't exist yet. When a service that matches the value specified in the ExportedServiceSet is deployed, it will be automatically exported. Procedure from the CLI Follow this procedure to create an ExportedServiceSet from the command line. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane; for example, red-mesh-system . USD oc project red-mesh-system Create an ExportedServiceSet file based on the following example where red-mesh is exporting services to green-mesh . Example ExportedServiceSet resource from red-mesh to green-mesh apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews Run the following command to upload and create the ExportedServiceSet resource in the red-mesh-system namespace. 
USD oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml> For example: USD oc create -n red-mesh-system -f export-to-green-mesh.yaml Create additional ExportedServiceSets as needed for each mesh peer in your federated mesh. Verification Run the following command to validate the services the red-mesh exports to share with green-mesh: USD oc get exportedserviceset <PeerMeshExportedTo> -o yaml For example: USD oc -n red-mesh-system get exportedserviceset green-mesh -o yaml Example validating the services exported from the red mesh that are shared with the green mesh. status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo The status.exportedServices array lists the services that are currently exported (these services matched the export rules in the ExportedServiceSet object ). Each entry in the array indicates the name of the exported service and details about the local service that is exported. If a service that you expected to be exported is missing, confirm the Service object exists, its name or labels match the exportRules defined in the ExportedServiceSet object, and that the Service object's namespace is configured as a member of the service mesh using the ServiceMeshMemberRoll or ServiceMeshMember object. 2.20.12. Importing a service into a federated mesh Importing services lets you explicitly specify which services exported from another mesh should be accessible within your service mesh. You use an ImportedServiceSet resource to select services for import. Only services exported by a mesh peer and explicitly imported are available to the mesh. Services that you do not explicitly import are not made available within the mesh. You can select services by namespace or name. You can use wildcards to select services, for example, to import all the services that were exported to the namespace. You can select services for export using a label selector, which may be global to the mesh, or scoped to a specific member namespace. You can import services using an alias. For example, you can import the custom-ns/bar service as other-mesh/bar . You can specify a custom domain suffix, which will be appended to the name.namespace of an imported service for its fully qualified domain name; for example, bar.other-mesh.imported.local . The following example is for the green-mesh importing a service that was exported by red-mesh . Example ImportedServiceSet kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings Table 2.11. ImportedServiceSet parameters Parameter Description Values Name of the ServiceMeshPeer that exported the service to the federated mesh. Name of the namespace containing the ServiceMeshPeer resource (the mesh system namespace). Type of rule that will govern the import for the service. 
The first matching rule found for the service will be used for the import. NameSelector To create a NameSelector rule, specify the namespace and the name of the exported service. Set to true to aggregate remote endpoint with local services. When true services are imported as <name>.<namespace>.svc.cluster.local . When true , an alias is required. When false , no alias is required. true / false To create a NameSelector rule that uses an alias for the service, after specifying the namespace and name for the service, then specify the alias for the namespace and the alias to be used for name of the service. Import the "bookinfo/ratings" service from the red-mesh into blue-mesh kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings Import all services from the red-mesh's west-data-center namespace into the green-mesh. These services will be accessible as <name>.west-data-center.svc.red-mesh-imports.local kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: "*" 2.20.12.1. Creating an ImportedServiceSet You create an ImportedServiceSet resource to explicitly declare the services that you want to import into your mesh. Services are imported with the name <exported-name>.<exported-namespace>.svc.<ServiceMeshPeer.name>.remote which is a "hidden" service, visible only within the egress gateway namespace and is associated with the exported service's hostname. The service will be available locally as <export-name>.<export-namespace>.<domainSuffix> , where domainSuffix is svc.<ServiceMeshPeer.name>-imports.local by default, unless importAsLocal is set to true , in which case domainSuffix is svc.cluster.local . If importAsLocal is set to false , the domain suffix in the import rule will be applied. You can treat the local import just like any other service in the mesh. It automatically routes through the egress gateway, where it is redirected to the exported service's remote name. Prerequisites The cluster and ServiceMeshControlPlane have been configured for mesh federation. An account with the cluster-admin role. Note You can configure services for import even if they haven't been exported yet. When a service that matches the value specified in the ImportedServiceSet is deployed and exported, it will be automatically imported. Procedure Follow this procedure to create an ImportedServiceSet from the command line. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane; for example, green-mesh-system . USD oc project green-mesh-system Create an ImportedServiceSet file based on the following example where green-mesh is importing services previously exported by red-mesh . 
Example ImportedServiceSet resource from red-mesh to green-mesh kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings Run the following command to upload and create the ImportedServiceSet resource in the green-mesh-system namespace. USD oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml> For example: USD oc create -n green-mesh-system -f import-from-red-mesh.yaml Create additional ImportedServiceSet resources as needed for each mesh peer in your federated mesh. Verification Run the following command to verify that the services were imported into green-mesh : USD oc get importedserviceset <PeerMeshImportedInto> -o yaml Example verifying that the services exported from the red mesh have been imported into the green mesh using the status section of the importedserviceset/red-mesh object in the green-mesh-system namespace: USD oc -n green-mesh-system get importedserviceset/red-mesh -o yaml status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: "" name: "" namespace: "" In the preceding example, only the ratings service is imported, as indicated by the populated fields under localService . The reviews service is available for import, but isn't currently imported because it does not match any importRules in the ImportedServiceSet object. 2.20.13. Configuring a federated mesh for failover Failover is the ability to switch automatically and seamlessly to a reliable backup system, for example another server. In the case of a federated mesh, you can configure a service in one mesh to fail over to a service in another mesh. You configure federation for failover by setting the importAsLocal and locality settings in an ImportedServiceSet resource and then configuring a DestinationRule that configures failover for the service to the locality specified in the ImportedServiceSet . Prerequisites Two or more OpenShift Container Platform 4.6 or above clusters already networked and federated. ExportedServiceSet resources already created for each mesh peer in the federated mesh. ImportedServiceSet resources already created for each mesh peer in the federated mesh. An account with the cluster-admin role. 2.20.13.1. Configuring an ImportedServiceSet for failover Locality-weighted load balancing allows administrators to control the distribution of traffic to endpoints based on the localities of where the traffic originates and where it will terminate. These localities are specified using arbitrary labels that designate a hierarchy of localities in {region}/{zone}/{sub-zone} form. In the examples in this section, the green-mesh is located in the us-east region, and the red-mesh is located in the us-west region.
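For the workloads running locally in each mesh, Istio typically derives locality from the well-known Kubernetes topology labels on the cluster nodes. As an illustrative sketch (the node name and label values here are assumptions for this example, not required federation configuration), a worker node in the green-mesh cluster might carry labels similar to the following:

apiVersion: v1
kind: Node
metadata:
  name: green-worker-0                        # hypothetical node name
  labels:
    topology.kubernetes.io/region: us-east    # region used for locality-aware load balancing
    topology.kubernetes.io/zone: us-east-1a   # zone within the region

The locality setting in the ImportedServiceSet shown below assigns a corresponding locality to the endpoints imported from the peer mesh, so that they can take part in the same locality-based failover hierarchy.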
Example ImportedServiceSet resource from red-mesh to green-mesh kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west Table 2.12. ImportedServiceLocality fields table Name Description Type region: Region within which imported services are located. string subzone: Subzone within which imported services are located. If Subzone is specified, Zone must also be specified. string zone: Zone within which imported services are located. If Zone is specified, Region must also be specified. string Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role, and enter the following command: USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane, and enter the following command: USD oc project <smcp-system> For example, green-mesh-system . USD oc project green-mesh-system Edit the ImportedServiceSet file, where <ImportedServiceSet.yaml> includes a full path to the file you want to edit, and enter the following command: USD oc edit -n <smcp-system> -f <ImportedServiceSet.yaml> For example, to modify the file that imports from the red-mesh-system to the green-mesh-system as shown in the ImportedServiceSet example: USD oc edit -n green-mesh-system -f import-from-red-mesh.yaml Modify the file: Set spec.importRules.importAsLocal to true . Set spec.locality to a region , zone , or subzone . Save your changes. 2.20.13.2. Configuring a DestinationRule for failover Create a DestinationRule resource that configures the following: Outlier detection for the service. This is required in order for failover to function properly. In particular, it configures the sidecar proxies to know when endpoints for a service are unhealthy, eventually triggering a failover to the locality. Failover policy between regions. This ensures that failover beyond a region boundary will behave predictably. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane. USD oc project <smcp-system> For example, green-mesh-system . USD oc project green-mesh-system Create a DestinationRule file based on the following example where if green-mesh is unavailable, the traffic should be routed from the green-mesh in the us-east region to the red-mesh in us-west .
Example DestinationRule apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: "ratings.bookinfo.svc.cluster.local" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m Deploy the DestinationRule , where <DestinationRule> includes the full path to your file, enter the following command: USD oc create -n <application namespace> -f <DestinationRule.yaml> For example: USD oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml 2.20.14. Removing a service from the federated mesh If you need to remove a service from the federated mesh, for example if it has become obsolete or has been replaced by a different service, you can do so. 2.20.14.1. To remove a service from a single mesh Remove the entry for the service from the ImportedServiceSet resource for the mesh peer that no longer should access the service. 2.20.14.2. To remove a service from the entire federated mesh Remove the entry for the service from the ExportedServiceSet resource for the mesh that owns the service. 2.20.15. Removing a mesh from the federated mesh If you need to remove a mesh from the federation, you can do so. Edit the removed mesh's ServiceMeshControlPlane resource to remove all federation ingress gateways for peer meshes. For each mesh peer that the removed mesh has been federated with: Remove the ServiceMeshPeer resource that links the two meshes. Edit the peer mesh's ServiceMeshControlPlane resource to remove the egress gateway that serves the removed mesh. 2.21. Extensions You can use WebAssembly extensions to add new features directly into the Red Hat OpenShift Service Mesh proxies. This lets you move even more common functionality out of your applications, and implement them in a single language that compiles to WebAssembly bytecode. Note WebAssembly extensions are not supported on IBM Z(R) and IBM Power(R). 2.21.1. WebAssembly modules overview WebAssembly modules can be run on many platforms, including proxies, and have broad language support, fast execution, and a sandboxed-by-default security model. Red Hat OpenShift Service Mesh extensions are Envoy HTTP Filters , giving them a wide range of capabilities: Manipulating the body and headers of requests and responses. Out-of-band HTTP requests to services not in the request path, such as authentication or policy checking. Side-channel data storage and queues for filters to communicate with each other. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API was deprecated in Red Hat OpenShift Service Mesh version 2.2 and was removed in Red Hat OpenShift Service Mesh version 2.3. There are two parts to writing a Red Hat OpenShift Service Mesh extension: You must write your extension using an SDK that exposes the proxy-wasm API and compile it to a WebAssembly module. You must then package the module into a container. Supported languages You can use any language that compiles to WebAssembly bytecode to write a Red Hat OpenShift Service Mesh extension, but the following languages have existing SDKs that expose the proxy-wasm API so that it can be consumed directly. Table 2.13. 
Supported languages Language Maintainer Repository AssemblyScript solo.io solo-io/proxy-runtime C++ proxy-wasm team (Istio Community) proxy-wasm/proxy-wasm-cpp-sdk Go tetrate.io tetratelabs/proxy-wasm-go-sdk Rust proxy-wasm team (Istio Community) proxy-wasm/proxy-wasm-rust-sdk 2.21.2. WasmPlugin container format Istio supports Open Container Initiative (OCI) images in its Wasm Plugin mechanism. You can distribute your Wasm Plugins as a container image, and you can use the spec.url field to refer to a container registry location. For example, quay.io/my-username/my-plugin:latest . Because each execution environment (runtime) for a WASM module can have runtime-specific configuration parameters, a WASM image can be composed of two layers: plugin.wasm (Required) - Content layer. This layer consists of a .wasm binary containing the bytecode of your WebAssembly module, to be loaded by the runtime. You must name this file plugin.wasm . runtime-config.json (Optional) - Configuration layer. This layer consists of a JSON-formatted string that describes metadata about the module for the target runtime. The config layer might also contain additional data, depending on the target runtime. For example, the config for a WASM Envoy Filter contains root_ids available on the filter. 2.21.3. WasmPlugin API reference The WasmPlugins API provides a mechanism to extend the functionality provided by the Istio proxy through WebAssembly filters. You can deploy multiple WasmPlugins. The phase and priority settings determine the order of execution (as part of Envoy's filter chain), allowing the configuration of complex interactions between user-supplied WasmPlugins and Istio's internal filters. In the following example, an authentication filter implements an OpenID flow and populates the Authorization header with a JSON Web Token (JWT). Istio authentication consumes this token and deploys it to the ingress gateway. The WasmPlugin file lives in the proxy sidecar filesystem. Note the field url . apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress Below is the same example, but this time an Open Container Initiative (OCI) image is used instead of a file in the filesystem. Note the fields url , imagePullPolicy , and imagePullSecret . apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress Table 2.14. WasmPlugin Field Reference Field Type Description Required spec.selector WorkloadSelector Criteria used to select the specific set of pods/VMs on which this plugin configuration should be applied. If omitted, this configuration will be applied to all workload instances in the same namespace. If the WasmPlugin field is present in the config root namespace, it will be applied to all applicable workloads in any namespace. No spec.url string URL of a Wasm module or OCI container. If no scheme is present, defaults to oci:// , referencing an OCI image. 
Other valid schemes are file:// for referencing .wasm module files present locally within the proxy container, and http[s]:// for .wasm module files hosted remotely. No spec.sha256 string SHA256 checksum that will be used to verify the Wasm module or OCI container. If the url field already references a SHA256 (using the @sha256: notation), it must match the value of this field. If an OCI image is referenced by tag and this field is set, its checksum will be verified against the contents of this field after pulling. No spec.imagePullPolicy PullPolicy The pull behavior to be applied when fetching an OCI image. Only relevant when images are referenced by tag instead of SHA. Defaults to the value IfNotPresent , except when an OCI image is referenced in the url field and the latest tag is used, in which case the value Always is the default, mirroring K8s behavior. Setting is ignored if the url field is referencing a Wasm module directly using file:// or http[s]:// . No spec.imagePullSecret string Credentials to use for OCI image pulling. The name of a secret in the same namespace as the WasmPlugin object that contains a pull secret for authenticating against the registry when pulling the image. No spec.phase PluginPhase Determines where in the filter chain this WasmPlugin object is injected. No spec.priority int64 Determines the ordering of WasmPlugins objects that have the same phase value. When multiple WasmPlugins objects are applied to the same workload in the same phase, they will be applied by priority and in descending order. If the priority field is not set, or two WasmPlugins objects with the same value, the ordering will be determined from the name and namespace of the WasmPlugins objects. Defaults to the value 0 . No spec.pluginName string The plugin name used in the Envoy configuration. Some Wasm modules might require this value to select the Wasm plugin to execute. No spec.pluginConfig Struct The configuration that will be passed on to the plugin. No spec.pluginConfig.verificationKey string The public key used to verify signatures of signed OCI images or Wasm modules. Must be supplied in PEM format. No The WorkloadSelector object specifies the criteria used to determine if a filter can be applied to a proxy. The matching criteria includes the metadata associated with a proxy, workload instance information such as labels attached to the pod/VM, or any other information that the proxy provides to Istio during the initial handshake. If multiple conditions are specified, all conditions need to match in order for the workload instance to be selected. Currently, only label based selection mechanism is supported. Table 2.15. WorkloadSelector Field Type Description Required matchLabels map<string, string> One or more labels that indicate a specific set of pods/VMs on which a policy should be applied. The scope of label search is restricted to the configuration namespace in which the resource is present. Yes The PullPolicy object specifies the pull behavior to be applied when fetching an OCI image. Table 2.16. PullPolicy Value Description <empty> Defaults to the value IfNotPresent , except for OCI images with tag latest, for which the default will be the value Always . IfNotPresent If an existing version of the image has been pulled before, that will be used. If no version of the image is present locally, we will pull the latest version. Always Always pull the latest version of an image when applying this plugin. 
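The interaction of the phase and priority fields is easiest to see with a small example. The following sketch is illustrative only and is not taken from this document: the plugin names, image URLs, and label values are hypothetical placeholders. Both plugins target the same workload and the same AUTHZ phase, so the one with the higher priority value, custom-authz-check, is executed first.
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: custom-authz-check        # hypothetical name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  url: oci://quay.io/<your_org>/authz-check:latest   # hypothetical image
  phase: AUTHZ
  priority: 100    # higher value, runs first within the AUTHZ phase
---
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: audit-logger              # hypothetical name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  url: oci://quay.io/<your_org>/audit-logger:latest  # hypothetical image
  phase: AUTHZ
  priority: 10     # lower value, runs after custom-authz-check
Because both objects share the same phase value, only priority determines their order; if the priorities were also equal, the ordering would fall back to the name and namespace of the objects, as described in the table above.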
Struct represents a structured data value, consisting of fields which map to dynamically typed values. In some languages, Struct might be supported by a native representation. For example, in scripting languages like JavaScript a struct is represented as an object. Table 2.17. Struct Field Type Description fields map<string, Value> Map of dynamically typed values. PluginPhase specifies the phase in the filter chain where the plugin will be injected. Table 2.18. PluginPhase Field Description <empty> Control plane decides where to insert the plugin. This will generally be at the end of the filter chain, right before the Router. Do not specify PluginPhase if the plugin is independent of others. AUTHN Insert plugin before Istio authentication filters. AUTHZ Insert plugin before Istio authorization filters and after Istio authentication filters. STATS Insert plugin before Istio stats filters and after Istio authorization filters. 2.21.3.1. Deploying WasmPlugin resources You can enable Red Hat OpenShift Service Mesh extensions using the WasmPlugin resource. In this example, istio-system is the name of the Service Mesh control plane project. The following example creates an openid-connect filter that performs an OpenID Connect flow to authenticate the user. Procedure Create the following example resource: Example plugin.yaml apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress Apply your plugin.yaml file with the following command: USD oc apply -f plugin.yaml 2.21.4. ServiceMeshExtension container format You must have a .wasm file containing the bytecode of your WebAssembly module, and a manifest.yaml file in the root of the container filesystem to make your container image a valid extension image. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API was deprecated in Red Hat OpenShift Service Mesh version 2.2 and was removed in Red Hat OpenShift Service Mesh version 2.3. manifest.yaml schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm Table 2.19. Field Reference for manifest.yml Field Description Required schemaVersion Used for versioning of the manifest schema. Currently the only possible value is 1 . This is a required field. name The name of your extension. This field is just metadata and currently unused. description The description of your extension. This field is just metadata and currently unused. version The version of your extension. This field is just metadata and currently unused. phase The default execution phase of your extension. This is a required field. priority The default priority of your extension. This is a required field. module The relative path from the container filesystem's root to your WebAssembly module. This is a required field. 2.21.5. ServiceMeshExtension reference The ServiceMeshExtension API provides a mechanism to extend the functionality provided by the Istio proxy through WebAssembly filters. There are two parts to writing a WebAssembly extension: Write your extension using an SDK that exposes the proxy-wasm API and compile it to a WebAssembly module. Package it into a container. 
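As a rough illustration of the packaging step, the following shell commands sketch one way to build such a container for the ServiceMeshExtension format described above. This is not the official build procedure: the registry, image name, and the use of a scratch base image are assumptions, and it presumes you have already compiled your module to extension.wasm and written the manifest.yaml shown earlier.
# Build context contains extension.wasm and manifest.yaml
cat > Containerfile <<'EOF'
# The image only carries files, so an empty base is assumed to be sufficient
FROM scratch
COPY manifest.yaml /manifest.yaml
COPY extension.wasm /extension.wasm
EOF
# Hypothetical image name; push to a registry your cluster can pull from
podman build -t quay.io/<your_org>/<your-extension>:1.0.0 .
podman push quay.io/<your_org>/<your-extension>:1.0.0
The image reference produced this way is what you would set in the spec.image field of the ServiceMeshExtension resource or, after migrating, in the spec.url field of a WasmPlugin resource using the oci:// scheme with the module renamed to plugin.wasm.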
Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API, which was deprecated in Red Hat OpenShift Service Mesh version 2.2, was removed in Red Hat OpenShift Service Mesh version 2.3. Table 2.20. ServiceMeshExtension Field Reference Field Description metadata.namespace The metadata.namespace field of a ServiceMeshExtension source has a special semantic: if it equals the Control Plane Namespace, the extension will be applied to all workloads in the Service Mesh that match its workloadSelector value. When deployed to any other Mesh Namespace, it will only be applied to workloads in that same Namespace. spec.workloadSelector The spec.workloadSelector field has the same semantic as the spec.selector field of the Istio Gateway resource . It will match a workload based on its Pod labels. If no workloadSelector value is specified, the extension will be applied to all workloads in the namespace. spec.config This is a structured field that will be handed over to the extension, with the semantics dependent on the extension you are deploying. spec.image A container image URI pointing to the image that holds the extension. spec.phase The phase determines where in the filter chain the extension is injected, in relation to existing Istio functionality like Authentication, Authorization and metrics generation. Valid values are: PreAuthN, PostAuthN, PreAuthZ, PostAuthZ, PreStats, PostStats. This field defaults to the value set in the manifest.yaml file of the extension, but can be overwritten by the user. spec.priority If multiple extensions with the same spec.phase value are applied to the same workload instance, the spec.priority value determines the ordering of execution. Extensions with higher priority will be executed first. This allows for inter-dependent extensions. This field defaults to the value set in the manifest.yaml file of the extension, but can be overwritten by the user. 2.21.5.1. Deploying ServiceMeshExtension resources You can enable Red Hat OpenShift Service Mesh extensions using the ServiceMeshExtension resource. In this example, istio-system is the name of the Service Mesh control plane project. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API was deprecated in Red Hat OpenShift Service Mesh version 2.2 and removed in Red Hat OpenShift Service Mesh version 2.3. For a complete example that was built using the Rust SDK, take a look at the header-append-filter . It is a simple filter that appends one or more headers to the HTTP responses, with their names and values taken out from the config field of the extension. See a sample configuration in the snippet below. Procedure Create the following example resource: Example ServiceMeshExtension resource extension.yaml apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100 Apply your extension.yaml file with the following command: USD oc apply -f <extension>.yaml 2.21.6. Migrating from ServiceMeshExtension to WasmPlugin resources The ServiceMeshExtension API, which was deprecated in Red Hat OpenShift Service Mesh version 2.2, was removed in Red Hat OpenShift Service Mesh version 2.3. 
If you are using the ServiceMeshExtension API, you must migrate to the WasmPlugin API to continue using your WebAssembly extensions. The APIs are very similar. The migration consists of two steps: Renaming your plugin file and updating the module packaging. Creating a WasmPlugin resource that references the updated container image. 2.21.6.1. API changes The new WasmPlugin API is similar to the ServiceMeshExtension , but with a few differences, especially in the field names: Table 2.21. Field changes between ServiceMeshExtensions and WasmPlugin ServiceMeshExtension WasmPlugin spec.config spec.pluginConfig spec.workloadSelector spec.selector spec.image spec.url spec.phase valid values: PreAuthN, PostAuthN, PreAuthZ, PostAuthZ, PreStats, PostStats spec.phase valid values: <empty>, AUTHN, AUTHZ, STATS The following is an example of how a ServiceMeshExtension resource could be converted into a WasmPlugin resource. ServiceMeshExtension resource apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100 New WasmPlugin resource equivalent to the ServiceMeshExtension above apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value 2.21.6.2. Container image format changes The new WasmPlugin container image format is similar to the ServiceMeshExtensions , with the following differences: The ServiceMeshExtension container format required a metadata file named manifest.yaml in the root directory of the container filesystem. The WasmPlugin container format does not require a manifest.yaml file. The .wasm file (the actual plugin) that previously could have any filename now must be named plugin.wasm and must be located in the root directory of the container filesystem. 2.21.6.3. Migrating to WasmPlugin resources To upgrade your WebAssembly extensions from the ServiceMeshExtension API to the WasmPlugin API, you rename your plugin file. Prerequisites ServiceMeshControlPlane is upgraded to version 2.2 or later. Procedure Update your container image. If the plugin is already in /plugin.wasm inside the container, skip to the next step. If not: Ensure the plugin file is named plugin.wasm . You must name the extension file plugin.wasm . Ensure the plugin file is located in the root (/) directory. You must store extension files in the root of the container filesystem. Rebuild your container image and push it to a container registry. Remove the ServiceMeshExtension resource and create a WasmPlugin resource that refers to the new container image you built. 2.22. OpenShift Service Mesh Console plugin The OpenShift Service Mesh Console (OSSMC) plugin is an extension to the OpenShift Container Platform web console that provides visibility into your Service Mesh. With the OSSMC plugin installed, a new Service Mesh menu option is available in the navigation menu on the left side of the web console, as well as new Service Mesh tabs that enhance the existing Workloads and Services console pages.
Important If you are using a certificate that your browser does not initially trust, you must tell your browser to trust the certificate first before you are able to access the OSSMC plugin. To do this, go to the Kiali standalone user interface (UI) and tell the browser to accept its certificate. 2.22.1. About the OpenShift Service Mesh Console plugin The OpenShift Service Mesh Console (OSSMC) plugin is an extension to the OpenShift Container Platform web console that provides visibility into your Service Mesh. Warning The OSSMC plugin only supports a single Kiali instance. Whether that Kiali instance is configured to access only a subset of OpenShift projects or has access cluster-wide to all projects does not matter. However, only a single Kiali instance can be accessed. You can install the OSSMC plugin in only one of two ways: using the OpenShift Container Platform web console, or through the CLI. Note The OSSMC plugin is only supported on Service Mesh 2.5 or later. Specifically, the ServiceMeshControlPlane version must be set to 2.5 or later. Installing the OSSMC plugin creates a new category, Service Mesh , in the main OpenShift Container Platform web console navigation. Click Service Mesh to see: Overview for a summary of your mesh displayed as cards that represent the namespaces in the mesh Graph for a full topology view of your mesh represented by nodes and edges, each node representing a component of the mesh and each edge representing traffic flowing through the mesh between components Istio config for a list of all Istio configuration files in your mesh with a column that provides a quick way to know if the configuration for each resource is valid Under Workloads , the OSSMC plugin adds a Service Mesh tab that contains the following subtabs: Overview subtab provides a summary of the selected workload including a localized topology graph showing the workload with all inbound and outbound edges and nodes Traffic subtab displays information about all inbound and outbound traffic to the workload. Logs subtab shows the logs for the workload's containers You can view container logs individually or in a unified fashion, ordered by log time. This is especially helpful to see how the Envoy sidecar proxy logs relate to your workload's application logs. You can enable the tracing span integration which then allows you to see which logs correspond to trace spans. Metrics subtab shows both inbound and outbound metric graphs in the corresponding subtabs. All the workload metrics can be displayed here, providing you with a detail view of the performance of your workload. You can enable the tracing span integration which allows you to see which spans occurred at the same time as the metrics. Click a span marker in the graph to view the specific spans associated with that timeframe. Traces provides a chart showing the trace spans collected over the given timeframe. Click a bubble to drill down into those trace spans; the trace spans can provide you the most low-level detail within your workload application, down to the individual request level. The trace details view gives further details, including heatmaps that provide you with a comparison of one span in relation to other requests and spans in the same timeframe. If you hover over a cell in a heatmap, a tooltip gives some details on the cell data. Envoy subtab provides information about the Envoy sidecar configuration. 
This is useful when you need to dig down deep into the sidecar configuration when debugging things such as connectivity issues. Under Networking , the OSSMC plugin adds a Service Mesh tab to Services and contains the Overview , Traffic , Inbound Metrics , and Traces subtabs that are similar to the same subtabs found in Workloads . 2.22.2. Installing OpenShift Service Mesh Console plugin using the OpenShift Container Platform web console You can install the OpenShift Service Mesh Console (OSSMC) plugin using the OpenShift Container Platform web console. Prerequisites OpenShift Container Platform is installed. Kiali Operator provided by Red Hat 1.73 is installed. Red Hat OpenShift Service Mesh (OSSM) is installed. ServiceMeshControlPlane 2.5 or later is installed. Procedure Navigate to Installed Operators . Click Kiali Operator provided by Red Hat . Click Create instance on the Red Hat OpenShift Service Mesh tile. Use the Create OSSMConsole form to create an instance of the OSSMConsole custom resource (CR). Name and Version are required fields. Note The Version field must match the spec.version field in your Kiali CR. Click Create . Navigate back to the OpenShift Container Platform web console and use the new menu options for visibility into your Service Mesh. 2.22.3. Installing OpenShift Service Mesh Console plugin using the CLI You can install the OpenShift Service Mesh Console (OSSMC) plugin using the CLI, instead of the OpenShift Container Platform web console. Prerequisites OpenShift Container Platform is installed. Kiali Operator provided by Red Hat 1.73 is installed. Red Hat OpenShift Service Mesh (OSSM) is installed. ServiceMeshControlPlane (SMCP) 2.5 or later is installed. Procedure Create a small OSSMConsole custom resource (CR) to instruct the operator to install the plugin: cat <<EOM | oc apply -f - apiVersion: kiali.io/v1alpha1 kind: OSSMConsole metadata: namespace: openshift-operators name: ossmconsole EOM Note The plugin resources are deployed in the same namespace where the OSSMConsole CR is created. Go to the OpenShift Container Platform web console. Refresh the browser window to see the new OSSMC plugin menu options. 2.22.4. Uninstalling OpenShift Service Mesh Console plugin using the OpenShift Container Platform web console You can uninstall the OpenShift Service Mesh Console (OSSMC) plugin by using the OpenShift Container Platform web console. Procedure Navigate to Installed Operators Operator details . Select the OpenShift Service Mesh Console tab. Click Delete OSSMConsole from the options menu. Note If you intend to also uninstall the Kiali Operator provided by Red Hat, you must first uninstall the OSSMC plugin and then uninstall the Operator. If you uninstall the Operator before ensuring the OSSMConsole CR is deleted then you may have difficulty removing that CR and its namespace. If this occurs then you must manually remove the finalizer on the CR in order to delete it and its namespace. You can do this using: USD oc patch ossmconsoles <CR name> -n <CR namespace> -p '{"metadata":{"finalizers": []}}' --type=merge . 2.22.5. Uninstalling OpenShift Service Mesh Console plugin using the CLI You can uninstall the OpenShift Service Mesh Console (OSSMC) plugin by using the OpenShift CLI ( oc ). 
Procedure Remove the OSSMC custom resource (CR) by running the following command: oc delete ossmconsoles <custom_resource_name> -n <custom_resource_namespace> Verify all CRs are deleted from all namespaces by running the following command: for r in USD(oc get ossmconsoles --ignore-not-found=true --all-namespaces -o custom-columns=NS:.metadata.namespace,N:.metadata.name --no-headers | sed 's/ */:/g'); do oc delete ossmconsoles -n USD(echo USDr|cut -d: -f1) USD(echo USDr|cut -d: -f2); done 2.22.6. Additional resources .spec.kiali.serviceNamespace 2.23. Using the 3scale WebAssembly module Note The threescale-wasm-auth module runs on integrations of 3scale API Management 2.11 or later with Red Hat OpenShift Service Mesh 2.1.0 or later. The threescale-wasm-auth module is a WebAssembly module that uses a set of interfaces known as an application binary interface (ABI). This is defined by the Proxy-WASM specification to drive any piece of software that implements the ABI so it can authorize HTTP requests against 3scale. As an ABI specification, Proxy-WASM defines the interaction between a piece of software named host and another named module , program , or extension . The host exposes a set of services used by the module to perform a task, and in this case, to process proxy requests. The host environment is composed of a WebAssembly virtual machine interacting with a piece of software, in this case, an HTTP proxy. The module itself runs in isolation from the outside world except for the instructions it runs on the virtual machine and the ABI specified by Proxy-WASM. This is a safe way to provide extension points to software: the extension can only interact in well-defined ways with the virtual machine and the host. The interaction provides a computing model and a connection to the outside world the proxy is meant to have. 2.23.1. Compatibility The threescale-wasm-auth module is designed to be fully compatible with all implementations of the Proxy-WASM ABI specification. At this point, however, it has only been thoroughly tested to work with the Envoy reverse proxy. 2.23.2. Usage as a stand-alone module Because of its self-contained design, it is possible to configure this module to work with Proxy-WASM proxies independently of Service Mesh, as well as 3scale Istio adapter deployments. 2.23.3. Prerequisites The module works with all supported 3scale releases, except when configuring a service to use OpenID Connect (OIDC), which requires 3scale 2.11 or later. 2.23.4. Configuring the threescale-wasm-auth module Cluster administrators on OpenShift Container Platform can configure the threescale-wasm-auth module to authorize HTTP requests to 3scale API Management through an application binary interface (ABI). The ABI defines the interaction between the host and the module, exposing the host's services, and allows you to use the module to process proxy requests. 2.23.4.1. The WasmPlugin API extension Service Mesh provides a custom resource definition to specify and apply Proxy-WASM extensions to sidecar proxies, known as WasmPlugin . Service Mesh applies this custom resource to the set of workloads that require HTTP API management with 3scale. See custom resource definition for more information. Note Configuring the WebAssembly extension is currently a manual process. Support for fetching the configuration for services from the 3scale system will be available in a future release. Prerequisites Identify a Kubernetes workload and namespace on your Service Mesh deployment to which you will apply this module.
You must have a 3scale tenant account. See SaaS or 3scale 2.11 On-Premises with a matching service and relevant applications and metrics defined. If you apply the module to the <product_page> microservice in the bookinfo namespace, see the Bookinfo sample application . The following example is the YAML format for the custom resource for threescale-wasm-auth module. This example refers to the upstream Maistra version of Service Mesh, WasmPlugin API. You must declare the namespace where the threescale-wasm-auth module is deployed, alongside a selector to identify the set of applications the module will apply to: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100 1 The namespace . 2 The selector . The spec.pluginConfig field depends on the module configuration and it is not populated in the example. Instead, the example uses the <yaml_configuration> placeholder value. You can use the format of this custom resource example. The spec.pluginConfig field varies depending on the application. All other fields persist across multiple instances of this custom resource. As examples: url : Only changes when newer versions of the module are deployed. phase : Remains the same, since this module needs to be invoked after the proxy has done any local authorization, such as validating OpenID Connect (OIDC) tokens. After you have the module configuration in spec.pluginConfig and the rest of the custom resource, apply it with the oc apply command: USD oc apply -f threescale-wasm-auth-bookinfo.yaml Additional resources Migrating from ServiceMeshExtension to WasmPlugin resources Custom Resources 2.23.5. Applying 3scale external ServiceEntry objects To have the threescale-wasm-auth module authorize requests against 3scale, the module must have access to 3scale services. You can do this within Red Hat OpenShift Service Mesh by applying an external ServiceEntry object and a corresponding DestinationRule object for TLS configuration to use the HTTPS protocol. The custom resources (CRs) set up the service entries and destination rules for secure access from within Service Mesh to 3scale Hosted (SaaS) for the backend and system components of the Service Management API and the Account Management API. The Service Management API receives queries for the authorization status of each request. The Account Management API provides API management configuration settings for your services. 
Procedure Apply the following external ServiceEntry CR and related DestinationRule CR for 3scale Hosted backend to your cluster: Add the ServiceEntry CR to a file called service-entry-threescale-saas-backend.yml : ServiceEntry CR apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Add the DestinationRule CR to a file called destination-rule-threescale-saas-backend.yml : DestinationRule CR apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net Apply and save the external ServiceEntry CR for the 3scale Hosted backend to your cluster, by running the following command: USD oc apply -f service-entry-threescale-saas-backend.yml Apply and save the external DestinationRule CR for the 3scale Hosted backend to your cluster, by running the following command: USD oc apply -f destination-rule-threescale-saas-backend.yml Apply the following external ServiceEntry CR and related DestinationRule CR for 3scale Hosted system to your cluster: Add the ServiceEntry CR to a file called service-entry-threescale-saas-system.yml : ServiceEntry CR apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Add the DestinationRule CR to a file called destination-rule-threescale-saas-system.yml : DestinationRule CR apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net Apply and save the external ServiceEntry CR for the 3scale Hosted system to your cluster, by running the following command: USD oc apply -f service-entry-threescale-saas-system.yml Apply and save the external DestinationRule CR for the 3scale Hosted system to your cluster, by running the following command: USD oc apply -f <destination-rule-threescale-saas-system.yml> Alternatively, you can deploy an in-mesh 3scale service. To deploy an in-mesh 3scale service, change the location of the services in the CR by deploying 3scale and linking to the deployment. Additional resources Service entry and destination rule documentation 2.23.6. The 3scale WebAssembly module configuration The WasmPlugin custom resource spec provides the configuration that the Proxy-WASM module reads from. The spec is embedded in the host and read by the Proxy-WASM module. Typically, the configurations are in the JSON file format for the modules to parse, however the WasmPlugin resource can interpret the spec value as YAML and convert it to JSON for consumption by the module. If you use the Proxy-WASM module in stand-alone mode, you must write the configuration using the JSON format. Using the JSON format means using escaping and quoting where needed within the host configuration files, for example Envoy . When you use the WebAssembly module with the WasmPlugin resource, the configuration is in the YAML format. In this case, an invalid configuration forces the module to show diagnostics based on its JSON representation to a sidecar's logging stream. 
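To make the YAML-versus-JSON distinction concrete, the following abridged fragment shows the same module configuration in both forms; the values are placeholders and the fields used here are described in the subsections that follow. When embedded in a WasmPlugin resource, the YAML form under spec.pluginConfig is converted to JSON for the module, whereas a stand-alone proxy configuration takes the JSON form directly, with whatever escaping and quoting the host configuration file requires.
YAML form, as used under spec.pluginConfig in a WasmPlugin resource:
pluginConfig:
  api: v1
  system:
    name: <system_name>
    token: <token>
Equivalent JSON form, as supplied to a stand-alone proxy configuration:
{ "api": "v1", "system": { "name": "<system_name>", "token": "<token>" } }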
Important The EnvoyFilter custom resource is not a supported API, although it can be used in some 3scale Istio adapter or Service Mesh releases. Using the EnvoyFilter custom resource is not recommended. Use the WasmPlugin API instead of the EnvoyFilter custom resource. If you must use the EnvoyFilter custom resource, you must specify the spec in JSON format. 2.23.6.1. Configuring the 3scale WebAssembly module The architecture of the 3scale WebAssembly module configuration depends on the 3scale account and authorization service, and the list of services to handle. Prerequisites The prerequisites are a set of minimum mandatory fields in all cases: For the 3scale account and authorization service: the backend-listener URL. For the list of services to handle: the service IDs and at least one credential look up method and where to find it. You will find examples for dealing with userkey , appid with appkey , and OpenID Connect (OIDC) patterns. The WebAssembly module uses the settings you specified in the static configuration. For example, if you add a mapping rule configuration to the module, it will always apply, even when the 3scale Admin Portal has no such mapping rule. The rest of the WasmPlugin resource exists around the spec.pluginConfig YAML entry. 2.23.6.2. The 3scale WebAssembly module api object The api top-level string from the 3scale WebAssembly module defines which version of the configuration the module will use. Note A non-existent or unsupported version of the api object renders the 3scale WebAssembly module inoperable. The api top-level string example apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1 # ... The api entry defines the rest of the values for the configuration. The only accepted value is v1 . New settings that break compatibility with the current configuration or need more logic that modules using v1 cannot handle, will require different values. 2.23.6.3. The 3scale WebAssembly module system object The system top-level object specifies how to access the 3scale Account Management API for a specific account. The upstream field is the most important part of the object. The system object is optional, but recommended unless you are providing a fully static configuration for the 3scale WebAssembly module, which is an option if you do not want to provide connectivity to the system component of 3scale. When you provide static configuration objects in addition to the system object, the static ones always take precedence. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300 # ... Table 2.22. system object fields Name Description Required name An identifier for the 3scale service, currently not referenced elsewhere. Optional upstream The details about a network host to be contacted. upstream refers to the 3scale Account Management API host known as system. Yes token A 3scale personal access token with read permissions. Yes ttl The minimum amount of seconds to consider a configuration retrieved from this host as valid before trying to fetch new changes. The default is 600 seconds (10 minutes). Note: there is no maximum amount, but the module will generally fetch any configuration within a reasonable amount of time after this TTL elapses. Optional 2.23.6.4. 
The 3scale WebAssembly module upstream object The upstream object describes an external host to which the proxy can perform calls. apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: "https://myaccount-admin.3scale.net/" timeout: 5000 # ... Table 2.23. upstream object fields Name Description Required name name is not a free-form identifier. It is the identifier for the external host as defined by the proxy configuration. In the case of stand-alone Envoy configurations, it maps to the name of a Cluster , also known as upstream in other proxies. Note: the value of this field matters, because the Service Mesh and 3scale Istio adapter control plane configure the name according to a format using a vertical bar (|) as the separator of multiple fields. For the purposes of this integration, always use the format: outbound|<port>||<hostname> . Yes url The complete URL to access the described service. Unless implied by the scheme, you must include the TCP port. Yes timeout Timeout in milliseconds so that connections to this service that take more than the amount of time to respond will be considered errors. The default is 1000 milliseconds. Optional 2.23.6.5. The 3scale WebAssembly module backend object The backend top-level object specifies how to access the 3scale Service Management API for authorizing and reporting HTTP requests. This service is provided by the Backend component of 3scale. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: # ... backend: name: backend upstream: <object> # ... Table 2.24. backend object fields Name Description Required name An identifier for the 3scale backend, currently not referenced elsewhere. Optional upstream The details about a network host to be contacted. This must refer to the 3scale Service Management API host, known as backend. Yes. The most important and required field. 2.23.6.6. The 3scale WebAssembly module services object The services top-level object specifies which service identifiers are handled by this particular instance of the module . Since accounts have multiple services, you must specify which ones are handled. The rest of the configuration revolves around how to configure services. The services field is required. It is an array that must contain at least one service to be useful. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: # ... services: - id: "2555417834789" token: service_token authorities: - "*.app" - 0.0.0.0 - "0.0.0.0:8443" credentials: <object> mapping_rules: <object> # ... Each element in the services array represents a 3scale service. Table 2.25. services object fields Name Description Required ID An identifier for this 3scale service, currently not referenced elsewhere. Yes token This token can be found in the proxy configuration for your service in System, or you can retrieve it from System with the following curl command: curl "https://<system_host>/admin/api/services/<service_id>/proxy/configs/production/latest.json?access_token=<access_token>" | jq '.proxy_config.content.backend_authentication_value' Optional authorities An array of strings, each one representing the Authority of a URL to match. These strings accept glob patterns supporting the asterisk ( * ), plus sign ( + ), and question mark ( ? ) matchers. Yes credentials An object defining which kind of credentials to look for and where.
Yes mapping_rules An array of objects representing mapping rules and 3scale methods to hit. Optional 2.23.6.7. The 3scale WebAssembly module credentials object The credentials object is a component of the service object. credentials specifies which kind of credentials to be looked up and the steps to perform this action. All fields are optional, but you must specify at least one, user_key or app_id . The order in which you specify each credential is irrelevant because it is pre-established by the module. Only specify one instance of each credential. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: # ... services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries> # ... Table 2.26. credentials object fields Name Description Required user_key This is an array of lookup queries that defines a 3scale user key. A user key is commonly known as an API key. Optional app_id This is an array of lookup queries that define a 3scale application identifier. Application identifiers are provided by 3scale or by using an identity provider like Red Hat Single Sign-On (RH-SS0) , or OpenID Connect (OIDC). The resolution of the lookup queries specified here, whenever it is successful and resolves to two values, it sets up the app_id and the app_key . Optional app_key This is an array of lookup queries that define a 3scale application key. Application keys without a resolved app_id are useless, so only specify this field when app_id has been specified. Optional 2.23.6.8. The 3scale WebAssembly module lookup queries The lookup query object is part of any of the fields in the credentials object. It specifies how a given credential field should be found and processed. When evaluated, a successful resolution means that one or more values were found. A failed resolution means that no values were found. Arrays of lookup queries describe a short-circuit or relationship: a successful resolution of one of the queries stops the evaluation of any remaining queries and assigns the value or values to the specified credential-type. Each query in the array is independent of each other. A lookup query is made up of a single field, a source object, which can be one of a number of source types. See the following example: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: # ... services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> # ... app_id: - <source_type>: <object> # ... app_key: - <source_type>: <object> # ... 2.23.6.9. The 3scale WebAssembly module source object A source object exists as part of an array of sources within any of the credentials object fields. The object field name, referred to as a source -type is any one of the following: header : The lookup query receives HTTP request headers as input. query_string : The lookup query receives the URL query string parameters as input. filter : The lookup query receives filter metadata as input. All source -type objects have at least the following two fields: Table 2.27. source-type object fields Name Description Required keys An array of strings, each one a key , referring to entries found in the input data. Yes ops An array of operations that perform a key entry match. The array is a pipeline where operations receive inputs and generate outputs on the operation. 
An operation failing to provide an output resolves the lookup query as failed. The pipeline order of the operations determines the evaluation order. Optional The filter field name has a required path entry to show the path in the metadata you use to look up data. When a key matches the input data, the rest of the keys are not evaluated and the source resolution algorithm jumps to executing the operations ( ops ) specified, if any. If no ops are specified, the result value of the matching key , if any, is returned. Operations provide a way to specify certain conditions and transformations for inputs you have after the first phase looks up a key . Use operations when you need to transform, decode, and assert properties; however, they do not provide a mature language to deal with all needs and lack Turing-completeness. A stack stores the outputs of operations. When evaluated, the lookup query finishes by assigning the value or values at the bottom of the stack, depending on how many values the credential consumes. 2.23.6.10. The 3scale WebAssembly module operations object Each element in the ops array belonging to a specific source type is an operation object that either applies transformations to values or performs tests. The field name to use for such an object is the name of the operation itself, and any values are the parameters to the operation , which could be structure objects, for example, maps with fields and values, lists, or strings. Most operations attend to one or more inputs, and produce one or more outputs. When they consume inputs or produce outputs, they work with a stack of values: each value consumed by the operations is popped from the stack of values, which is initially populated with any source matches. The values outputted by them are pushed to the stack. Other operations do not consume or produce outputs other than asserting certain properties, but they inspect a stack of values. Note When resolution finishes, the values picked up by the next step, such as assigning the values to be an app_id , app_key , or user_key , are taken from the bottom values of the stack. There are a few different operations categories: decode : These transform an input value by decoding it to get a different format. string : These take a string value as input and perform transformations and checks on it. stack : These take a set of values in the input and perform multiple stack transformations and selection of specific positions in the stack. check : These assert properties about sets of operations in a side-effect free way. control : These perform operations that allow for modifying the evaluation flow. format : These parse the format-specific structure of input values and look up values in it. All operations are specified by the name identifiers as strings. Additional resources Available operations 2.23.6.11. The 3scale WebAssembly module mapping_rules object The mapping_rules object is part of the service object. It specifies a set of REST path patterns and related 3scale metrics and count increments to use when the patterns match. You need the value if no dynamic configuration is provided in the system top-level object. If the object is provided in addition to the system top-level entry, then the mapping_rules object is evaluated first. mapping_rules is an array object. Each element of that array is a mapping_rule object. The evaluated matching mapping rules on an incoming request provide the set of 3scale methods for authorization and reporting to the APIManager .
When multiple matching rules refer to the same methods , there is a summation of deltas when calling into 3scale. For example, if two rules increase the Hits method twice with deltas of 1 and 3, a single method entry for Hits reporting to 3scale has a delta of 4. 2.23.6.12. The 3scale WebAssembly module mapping_rule object The mapping_rule object is part of an array in the mapping_rules object. The mapping_rule object fields specify the following information: The HTTP request method to match. A pattern to match the path against. The 3scale methods to report along with the amount to report. The order in which you specify the fields determines the evaluation order. Table 2.28. mapping_rule object fields Name Description Required method Specifies a string representing an HTTP request method, also known as verb. Accepted values match any one of the HTTP method names, case-insensitive. A special value of any matches any method. Yes pattern The pattern to match the HTTP request's URI path component. This pattern follows the same syntax as documented by 3scale. It allows wildcards (use of the asterisk (*) character) using any sequence of characters between braces such as {this} . Yes usages A list of usage objects. When the rule matches, all methods with their deltas are added to the list of methods sent to 3scale for authorization and reporting. Embed the usages object with the following required fields: name : The method system name to report. delta : The amount by which to increase that method. Yes last Whether the successful matching of this rule should stop the evaluation of more mapping rules. Optional Boolean. The default is false. The following example is independent of existing hierarchies between methods in 3scale. That is, anything run on the 3scale side will not affect this. For example, the Hits metric might be a parent of them all, so it stores 4 hits due to the sum of all reported methods in the authorized request and calls the 3scale Authrep API endpoint. The example below uses a GET request to a path, /products/1/sold , that matches all the rules. mapping_rules GET request example apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: # ... mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1 # ... All usages get added to the request the module performs to 3scale with usage data as follows: Hits: 1 products: 2 sales: 1 2.23.7. The 3scale WebAssembly module examples for credentials use cases You will spend most of your time applying configuration steps to obtain credentials in the requests to your services. The following are credentials examples, which you can modify to adapt to specific use cases. You can combine them all, although when you specify multiple source objects with their own lookup queries , they are evaluated in order until one of them successfully resolves. 2.23.7.1. API key (user_key) in query string parameters The following example looks up a user_key in a query string parameter or header of the same name: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: # ... services: # ... credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> # ... 2.23.7.2.
Application ID and key The following example looks up app_key and app_id credentials in a query or headers. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: # ... services: # ... credentials: app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key> # ... 2.23.7.3. Authorization header A request includes an app_id and app_key in an authorization header. If there is at least one or two values outputted at the end, then you can assign the app_key . The resolution here assigns the app_key if there is one or two outputted at the end. The authorization header specifies a value with the type of authorization and its value is encoded as Base64 . This means you can split the value by a space character, take the second output and then split it again using a colon (:) as the separator. For example, if you use this format app_id:app_key , the header looks like the following example for credential : You must use lower case header field names as shown in the following example: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: # ... services: # ... credentials: app_id: - header: keys: - authorization ops: - split: separator: " " max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key # ... The example use case looks at the headers for an authorization : It takes its string value and split it by a space, checking that it generates at least two values of a credential -type and the credential itself, then dropping the credential -type. It then decodes the second value containing the data it needs, and splits it by using a colon (:) character to have an operations stack including first the app_id , then the app_key , if it exists. If app_key does not exist in the authorization header then its specific sources are checked, for example, the header with the key app_key in this case. To add extra conditions to credentials , allow Basic authorizations, where app_id is either aladdin or admin , or any app_id being at least 8 characters in length. app_key must contain a value and have a minimum of 64 characters as shown in the following example: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: # ... services: # ... credentials: app_id: - header: keys: - authorization ops: - split: separator: " " max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin # ... After picking up the authorization header value, you get a Basic credential -type by reversing the stack so that the type is placed on top. Run a glob match on it. When it validates, and the credential is decoded and split, you get the app_id at the bottom of the stack, and potentially the app_key at the top. Run a test: if there are two values in the stack, meaning an app_key was acquired. Ensure the string length is between 1 and 63, including app_id and app_key . If the key's length is zero, drop it and continue as if no key exists. If there was only an app_id and no app_key , the missing else branch indicates a successful test and evaluation continues. The last operation, assert , indicates that no side-effects make it into the stack. 
You can then modify the stack: Reverse the stack to have the app_id at the top. Whether or not an app_key is present, reversing the stack ensures app_id is at the top. Use and to preserve the contents of the stack across tests. Then use one of the following possibilities: Make sure app_id has a string length of at least 8. Make sure app_id matches either aladdin or admin . 2.23.7.4. OpenID Connect (OIDC) use case For Service Mesh and the 3scale Istio adapter, you must deploy a RequestAuthentication as shown in the following example, filling in your own workload data and jwtRules : apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs When you apply the RequestAuthentication , it configures Envoy with a native plugin to validate JWT tokens. The proxy validates everything before running the module so any requests that fail do not make it to the 3scale WebAssembly module. When a JWT token is validated, the proxy stores its contents in an internal metadata object, with an entry whose key depends on the specific configuration of the plugin. This use case gives you the ability to look up structure objects with a single entry containing an unknown key name. The 3scale app_id for OIDC matches the OAuth client_id . This is found in the azp or aud fields of JWT tokens. To get app_id field from Envoy's native JWT authentication filter, see the following example: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: # ... services: # ... credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - "0" keys: - azp - aud ops: - take: head: 1 # ... The example instructs the module to use the filter source type to look up filter metadata for an object from the Envoy -specific JWT authentication native plugin. This plugin includes the JWT token as part of a structure object with a single entry and a preconfigured name. Use 0 to specify that you will only access the single entry. The resulting value is a structure for which you will resolve two fields: azp : The value where app_id is found. aud : The value where this information can also be found. The operation ensures only one value is held for assignment. 2.23.7.5. Picking up the JWT token from a header Some setups might have validation processes for JWT tokens where the validated token would reach this module via a header in JSON format. To get the app_id , see the following example: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: # ... services: # ... credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 # ,,, 2.23.8. 3scale WebAssembly module minimal working configuration The following is an example of a 3scale WebAssembly module minimal working configuration. You can copy and paste this and edit it to work with your own configuration. 
apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - "*" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key> 2.24. Using the 3scale Istio adapter The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh. Important You can only use the 3scale Istio adapter with Red Hat OpenShift Service Mesh versions 2.0 and below. The Mixer component was deprecated in release 2.0 and removed in release 2.1. For Red Hat OpenShift Service Mesh versions 2.1.0 and later you should use the 3scale WebAssembly module . If you want to enable 3scale backend cache with the 3scale Istio adapter, you must also enable Mixer policy and Mixer telemetry. See Deploying the Red Hat OpenShift Service Mesh control plane . 2.24.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh You can use these examples to configure requests to your services using the 3scale Istio Adapter. Prerequisites Red Hat OpenShift Service Mesh version 2.x A working 3scale account ( SaaS or 3scale 2.9 On-Premises ) Enabling backend cache requires 3scale 2.9 or greater Red Hat OpenShift Service Mesh prerequisites Ensure Mixer policy enforcement is enabled. Update Mixer policy enforcement section provides instructions to check the current Mixer policy enforcement status and enable policy enforcement. Mixer policy and telemetry must be enabled if you are using a mixer plugin. You will need to properly configure the Service Mesh Control Plane (SMCP) when upgrading. Note To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file. Note Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids . Use a single handler per 3scale account by following the steps below: Procedure Create a handler for your 3scale account and specify your account credentials. Omit any service identifier. 
apiVersion: "config.istio.io/v1alpha2" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: "https://<organization>-admin.3scale.net/" access_token: "<ACCESS_TOKEN>" connection: address: "threescale-istio-adapter:3333" Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance, and you wish to leverage the internal cluster DNS. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows: Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id . Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1. Do step 2 to link it to your 3scale account credentials and to its service identifier, whenever you intend to add more services. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler. Rule configuration example apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: threescale spec: match: destination.labels["service-mesh.3scale.net"] == "true" actions: - handler: threescale.handler instances: - threescale-authorization.instance 2.24.1.1. Generating 3scale custom resources The adapter includes a tool that allows you to generate the handler , instance , and rule custom resources. Table 2.29. Usage Option Description Required Default value -h, --help Produces help output for available options No --name Unique name for this URL, token pair Yes -n, --namespace Namespace to generate templates No istio-system -t, --token 3scale access token Yes -u, --url 3scale Admin Portal URL Yes --backend-url 3scale backend URL. If set, it overrides the value that is read from system configuration No -s, --service 3scale API/Service ID No --auth 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) No Hybrid -o, --output File to save produced manifests to No Standard output --version Outputs the CLI version and exits immediately No 2.24.1.1.1. Generate templates from URL examples Note Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter . Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec . Procedure Use the 3scale-config-gen command to autogenerate templates files allowing the token, URL pair to be shared by multiple services as a single handler: The following example generates the templates with the service ID embedded in the handler: Additional resources Tokens . 2.24.1.2. Generating manifests from a deployed adapter Note NAME is an identifier you use to identify with the service you are managing with 3scale. The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information. Run this command to generate manifests from a deployed adapter in the istio-system namespace: This will produce sample output to the terminal. 
Edit these samples if required and create the objects using the oc create command. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways: Label the workload (recommended) Hard code the handler as service_id Update the workload with the required annotations: Note You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence . 2.24.1.3. Routing service traffic through the adapter Follow these steps to drive traffic for your service through the 3scale adapter. Prerequisites Credentials and service ID from your 3scale administrator. Procedure Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. the value, threescale , refers to the name of the generated handler. This handler stores the access token required to call 3scale. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time. 2.24.2. Configure the integration settings in 3scale Follow this procedure to configure the 3scale integration settings. Note For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program. Procedure Navigate to [your_API_name] Integration Click Settings . Select the Istio option under Deployment . The API Key (user_key) option under Authentication is selected by default. Click Update Product to save your selection. Click Configuration . Click Update Configuration . 2.24.3. Caching behavior Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value. Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value. By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry. 2.24.4. Authenticating requests This release supports the following authentication methods: Standard API Keys : single randomized strings or hashes acting as an identifier and a secret token. Application identifier and key pairs : immutable identifier and mutable secret key strings. OpenID authentication method : client ID string parsed from the JSON Web Token. 2.24.4.1. Applying authentication patterns Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from: Request headers Request parameters Both request headers and query parameters Note When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key , this must be referenced in the configuration as request.headers["user-key"] . 2.24.4.1.1. 
API key authentication method Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option. In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header. API key authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"] . 2.24.4.1.2. Application ID and application key pair authentication method Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option. In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed. Application ID and application key pair authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification , change request.query_params["app_id"] to request.query_params["identification"] . 2.24.4.1.3. OpenID authentication method To use the OpenID Connect (OIDC) authentication method , use the properties value on the subject field to set client_id , and optionally app_key . You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp . You can modify this as needed. OpenID authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). 
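For a quick client-side sanity check of this method, the following hedged sketch fetches a token from a Keycloak-style IdP and then calls the protected service with it as a Bearer token. The hostnames, realm, and client credentials are placeholder assumptions, not values defined in this guide:
# All values below are placeholders; substitute your own IdP host, realm, client, and service host.
TOKEN=$(curl -s \
  -d "grant_type=client_credentials" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>" \
  "https://<keycloak_host>/auth/realms/<realm>/protocol/openid-connect/token" | jq -r .access_token)
# The adapter derives the client identifier (app_id) from the azp claim of the validated JWT.
curl -v -H "Authorization: Bearer ${TOKEN}" "https://<service_host>/"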
You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request. In the sample RequestAuthentication defined below, replace issuer , jwksUri , and selector as appropriate. OpenID Policy example apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs 2.24.4.1.4. Hybrid authentication method You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key. In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers. Hybrid authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | properties: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" 2.24.5. 3scale Adapter metrics The adapter, by default reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus. Note There are incompatible changes in the 3scale Istio Adapter metrics since the releases in Service Mesh 1.x. In Prometheus, metrics have been renamed with one addition for the backend cache, so that the following metrics exist as of Service Mesh 2.0: Table 2.30. Prometheus metrics Metric Type Description threescale_latency Histogram Request latency between adapter and 3scale. threescale_http_total Counter HTTP Status response codes for requests to 3scale backend. threescale_system_cache_hits Counter Total number of requests to the 3scale system fetched from the configuration cache. threescale_backend_cache_hits Counter Total number of requests to 3scale backend fetched from the backend cache. 2.24.6. 3scale backend cache The 3scale backend cache provides an authorization and reporting cache for clients of the 3scale Service Management API. This cache is embedded in the adapter to enable lower latencies in responses in certain situations assuming the administrator is willing to accept the trade-offs. Note 3scale backend cache is disabled by default. 3scale backend cache functionality trades inaccuracy in rate limiting and potential loss of hits since the last flush was performed for low latency and higher consumption of resources in the processor and memory. 2.24.6.1. 
Advantages of enabling backend cache The following are advantages to enabling the backend cache: Enable the backend cache when you find latencies are high while accessing services managed by the 3scale Istio Adapter. Enabling the backend cache will stop the adapter from continually checking with the 3scale API manager for request authorizations, which will lower the latency. This creates an in-memory cache of 3scale authorizations for the 3scale Istio Adapter to store and reuse before attempting to contact the 3scale API manager for authorizations. Authorizations will then take much less time to be granted or denied. Backend caching is useful in cases when you are hosting the 3scale API manager in another geographical location from the service mesh running the 3scale Istio Adapter. This is generally the case with the 3scale Hosted (SaaS) platform, but also if a user hosts their 3scale API manager in another cluster located in a different geographical location, in a different availability zone, or in any case where the network overhead to reach the 3scale API manager is noticeable. 2.24.6.2. Trade-offs for having lower latencies The following are trade-offs for having lower latencies: Each 3scale adapter's authorization state updates every time a flush happens. This means two or more instances of the adapter will introduce more inaccuracy between flushing periods. There is a greater chance of too many requests being granted that exceed limits and introduce erratic behavior, which leads to some requests going through and some not, depending on which adapter processes each request. An adapter cache that cannot flush its data and update its authorization information risks shut down or crashing without reporting its information to the API manager. A fail open or fail closed policy will be applied when an adapter cache cannot determine whether a request must be granted or denied, possibly due to network connectivity issues in contacting the API manager. When cache misses occur, typically right after booting the adapter or after a long period of no connectivity, latencies will grow in order to query the API manager. An adapter cache must do much more work on computing authorizations than it would without an enabled cache, which will tax processor resources. Memory requirements will grow proportionally to the combination of the amount of limits, applications, and services managed by the cache. 2.24.6.3. Backend cache configuration settings The following points explain the backend cache configuration settings: Find the settings to configure the backend cache in the 3scale configuration options. The last 3 settings control enabling of backend cache: PARAM_USE_CACHE_BACKEND - set to true to enable backend cache. PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS - sets time in seconds between consecutive attempts to flush cache data to the API manager. PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED - set whether or not to allow/open or deny/close requests to the services when there is not enough cached data and the 3scale API manager cannot be reached. 2.24.7. 3scale Istio Adapter APIcast emulation The 3scale Istio Adapter performs as APIcast would when the following conditions occur: When a request cannot match any mapping rule defined, the returned HTTP code is 404 Not Found. This was previously 403 Forbidden. When a request is denied because it goes over limits, the returned HTTP code is 429 Too Many Requests. This was previously 403 Forbidden. 
When generating default templates via the CLI, it will use underscores rather than dashes for the headers, for example: user_key rather than user-key . 2.24.8. 3scale Istio adapter verification You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem. Procedure Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace: USD oc get pods -n istio-system Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version: USD oc logs -n istio-system <3scale_adapter_pod_name> When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. Additional resources Inspecting pod and container logs . 2.24.9. 3scale Istio adapter troubleshooting checklist As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation: Incorrect YAML indentation. Missing YAML sections. Forgot to apply the changes in the YAML to the cluster. Forgot to label the service workloads with the service-mesh.3scale.net/credentials key. Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account. The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix. The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist. Wrong access token or URL for the 3scale Admin Portal in the handler. The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id , app_key , or client_id , either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing. Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it. 2.25. Troubleshooting your service mesh This section describes how to identify and resolve common problems in Red Hat OpenShift Service Mesh. Use the following sections to help troubleshoot and debug problems when deploying Red Hat OpenShift Service Mesh on OpenShift Container Platform. 2.25.1. Understanding Service Mesh versions In order to understand what version of Red Hat OpenShift Service Mesh you have deployed on your system, you need to understand how each of the component versions is managed. Operator version - The most current Operator version is 2.6.6. The Operator version number only indicates the version of the currently installed Operator. Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, the version of the Operator does not determine the version of your deployed ServiceMeshControlPlane resources. Important Upgrading to the latest Operator version automatically applies patch updates, but does not automatically upgrade your Service Mesh control plane to the latest minor version.
ServiceMeshControlPlane version - The ServiceMeshControlPlane version determines what version of Red Hat OpenShift Service Mesh you are using. The value of the spec.version field in the ServiceMeshControlPlane resource controls the architecture and configuration settings that are used to install and deploy Red Hat OpenShift Service Mesh. When you create the Service Mesh control plane you can set the version in one of two ways: To configure in the Form View, select the version from the Control Plane Version menu. To configure in the YAML View, set the value for spec.version in the YAML file. Operator Lifecycle Manager (OLM) does not manage Service Mesh control plane upgrades, so the version number for your Operator and ServiceMeshControlPlane (SMCP) may not match, unless you have manually upgraded your SMCP. 2.25.2. Troubleshooting Operator installation In addition to the information in this section, be sure to review the following topics: What are Operators? Operator Lifecycle Management concepts . OpenShift Operator troubleshooting section . OpenShift installation troubleshooting section . 2.25.2.1. Validating Operator installation When you install the Red Hat OpenShift Service Mesh Operators, OpenShift automatically creates the following objects as part of a successful Operator installation: config maps custom resource definitions deployments pods replica sets roles role bindings secrets service accounts services From the OpenShift Container Platform console You can verify that the Operator pods are available and running by using the OpenShift Container Platform console. Navigate to Workloads Pods . Select the openshift-operators namespace. Verify that the following pods exist and have a status of running : istio-operator jaeger-operator kiali-operator Select the openshift-operators-redhat namespace. Verify that the elasticsearch-operator pod exists and has a status of running . From the command line Verify the Operator pods are available and running in the openshift-operators namespace with the following command: USD oc get pods -n openshift-operators Example output NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s Verify the Elasticsearch operator with the following command: USD oc get pods -n openshift-operators-redhat Example output NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s 2.25.2.2. Troubleshooting service mesh Operators If you experience Operator issues: Verify your Operator subscription status. Verify that you did not install a community version of the Operator, instead of the supported Red Hat version. Verify that you have the cluster-admin role to install Red Hat OpenShift Service Mesh. Check for any errors in the Operator pod logs if the issue is related to installation of Operators. Note You can install Operators only through the OpenShift console, the OperatorHub is not accessible from the command line. 2.25.2.2.1. Viewing Operator pod logs You can view Operator logs by using the oc logs command. Red Hat may request logs to help resolve support cases. Procedure To view Operator pod logs, enter the command: USD oc logs -n openshift-operators <podName> For example, USD oc logs -n openshift-operators istio-operator-bb49787db-zgr87 2.25.3. 
Troubleshooting the control plane The Service Mesh control plane is composed of Istiod, which consolidates several control plane components (Citadel, Galley, Pilot) into a single binary. Deploying the ServiceMeshControlPlane also creates the other components that make up Red Hat OpenShift Service Mesh as described in the architecture topic. 2.25.3.1. Validating the Service Mesh control plane installation When you create the Service Mesh control plane, the Service Mesh Operator uses the parameters that you have specified in the ServiceMeshControlPlane resource file to do the following: Creates the Istio components and deploys the following pods: istiod istio-ingressgateway istio-egressgateway grafana prometheus Calls the Kiali Operator to create the Kiali deployment based on configuration in either the SMCP or the Kiali custom resource. Note You view the Kiali components under the Kiali Operator, not the Service Mesh Operator. Calls the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to create distributed tracing platform (Jaeger) components based on configuration in either the SMCP or the Jaeger custom resource. Note You view the Jaeger components under the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and the Elasticsearch components under the Red Hat Elasticsearch Operator, not the Service Mesh Operator. From the OpenShift Container Platform console You can verify the Service Mesh control plane installation in the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select the istio-system namespace. Select the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane, for example basic . To view the resources created by the deployment, click the Resources tab. You can use the filter to narrow your view, for example, to check that all the Pods have a status of running . If the SMCP status indicates any problems, check the status: output in the YAML file for more information. Navigate back to Operators Installed Operators . Select the OpenShift Elasticsearch Operator. Click the Elasticsearch tab. Click the name of the deployment, for example elasticsearch . To view the resources created by the deployment, click the Resources tab. If the Status column indicates any problems, check the status: output on the YAML tab for more information. Navigate back to Operators Installed Operators . Select the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. Click the Jaeger tab. Click the name of your deployment, for example jaeger . To view the resources created by the deployment, click the Resources tab. If the Status column indicates any problems, check the status: output on the YAML tab for more information. Navigate to Operators Installed Operators . Select the Kiali Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your deployment, for example kiali . To view the resources created by the deployment, click the Resources tab. If the Status column indicates any problems, check the status: output on the YAML tab for more information. From the command line Run the following command to see if the Service Mesh control plane pods are available and running, where istio-system is the namespace where you installed the SMCP.
USD oc get pods -n istio-system Example output NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s Check the status of the Service Mesh control plane deployment by using the following command. Replace istio-system with the namespace where you deployed the SMCP. USD oc get smcp -n istio-system The installation has finished successfully when the STATUS column is ComponentsReady . Example output NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady ["default"] 2.1.3 4m2s If you have modified and redeployed your Service Mesh control plane, the status should read UpdateSuccessful . Example output NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h If the SMCP status indicates anything other than ComponentsReady check the status: output in the SCMP resource for more information. USD oc describe smcp <smcp-name> -n <controlplane-namespace> Example output USD oc describe smcp basic -n istio-system Check the status of the Jaeger deployment with the following command, where istio-system is the namespace where you deployed the SMCP. USD oc get jaeger -n istio-system Example output NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m Check the status of the Kiali deployment with the following command, where istio-system is the namespace where you deployed the SMCP. USD oc get kiali -n istio-system Example output NAME AGE kiali 15m 2.25.3.1.1. Accessing the Kiali console You can view your application's topology, health, and metrics in the Kiali console. If your service is experiencing problems, the Kiali console lets you view the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. Kiali also provides an interactive graph view of your namespace in real time. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed, Kiali installed and configured. The installation process creates a route to access the Kiali console. If you know the URL for the Kiali console, you can access it directly. If you do not know the URL, use the following directions. Procedure for administrators Log in to the OpenShift Container Platform web console with an administrator role. Click Home Projects . On the Projects page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo . On the Project details page, in the Launcher section, click the Kiali link. Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation and namespaces have not yet been added to the mesh, there might not be any data to display other than istio-system . Procedure for developers Log in to the OpenShift Container Platform web console with a developer role. Click Project . On the Project Details page, if necessary, use the filter to find the name of your project. 
Click the name of your project, for example, bookinfo . On the Project page, in the Launcher section, click the Kiali link. Click Log In With OpenShift . 2.25.3.1.2. Accessing the Jaeger console To access the Jaeger console you must have Red Hat OpenShift Service Mesh installed, Red Hat OpenShift distributed tracing platform (Jaeger) installed and configured. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator have been deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from Kiali console Launch the Kiali console. Click Distributed Tracing in the left navigation pane. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, istio-system is the Service Mesh control plane namespace. USD oc get route -n istio-system jaeger -o jsonpath='{.spec.host}' Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 2.25.3.2. Troubleshooting the Service Mesh control plane If you are experiencing issues while deploying the Service Mesh control plane, Ensure that the ServiceMeshControlPlane resource is installed in a project that is separate from your services and Operators. This documentation uses the istio-system project as an example, but you can deploy your control plane in any project as long as it is separate from the project that contains your Operators and services. Ensure that the ServiceMeshControlPlane and Jaeger custom resources are deployed in the same project. For example, use the istio-system project for both. 2.25.4. 
Troubleshooting the data plane The data plane is a set of intelligent proxies that intercept and control all inbound and outbound network communications between services in the service mesh. Red Hat OpenShift Service Mesh relies on a proxy sidecar within the application's pod to provide service mesh capabilities to the application. 2.25.4.1. Troubleshooting sidecar injection Red Hat OpenShift Service Mesh does not automatically inject proxy sidecars into pods. You must opt in to sidecar injection. 2.25.4.1.1. Troubleshooting Istio sidecar injection Check to see if automatic injection is enabled in the Deployment for your application. If automatic injection for the Envoy proxy is enabled, there should be a sidecar.istio.io/inject:"true" annotation in the Deployment resource under spec.template.metadata.annotations . 2.25.4.1.2. Troubleshooting Jaeger agent sidecar injection Check to see if automatic injection is enabled in the Deployment for your application. If automatic injection for the Jaeger agent is enabled, there should be a sidecar.jaegertracing.io/inject:"true" annotation in the Deployment resource. For more information about sidecar injection, see Enabling automatic injection . 2.26. Troubleshooting Envoy proxy The Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy also collects and reports telemetry on the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. 2.26.1. Enabling Envoy access logs Envoy access logs are useful in diagnosing traffic failures and flows, and help with end-to-end traffic flow analysis. To enable access logging for all istio-proxy containers, edit the ServiceMeshControlPlane (SMCP) object to add a file name for the logging output. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane, for example istio-system . USD oc project istio-system Edit the ServiceMeshControlPlane file. USD oc edit smcp <smcp_name> As shown in the following example, use name to specify the file name for the proxy log. If you do not specify a value for name , no log entries will be written. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: proxy: accessLogging: file: name: /dev/stdout #file name For more information about troubleshooting pod issues, see Investigating pod issues . 2.26.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 2.26.2.1.
About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. 2.26.2.2. Searching the Red Hat Knowledgebase In the event of an OpenShift Container Platform issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . Click Search . In the search field, input keywords and strings relating to the problem, including: OpenShift Container Platform components (such as etcd ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click the Enter key. Optional: Select the OpenShift Container Platform product filter. Optional: Select the Documentation content type filter. 2.26.2.3. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather, replace <namespace> with your Service Mesh control plane namespace, such as istio-system . USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace> This creates a local directory that contains the following items: The Istio Operator namespace and its child objects All control plane namespaces and their children objects All namespaces and their children objects that belong to any service mesh All Istio custom resource definitions (CRD) All Istio CRD objects, such as VirtualServices, in a given namespace All Istio webhooks For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 2.26.2.4. Submitting a support case Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have a Red Hat Customer Portal account. You have a Red Hat Standard or Premium subscription. Procedure Log in to the Customer Support page of the Red Hat Customer Portal. Click Get support . On the Cases tab of the Customer Support page: Optional: Change the pre-filled account and owner details if needed. Select the appropriate category for your issue, such as Bug or Defect , and click Continue . Enter the following information: In the Summary field, enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Select OpenShift Container Platform from the Product drop-down menu. 
Select 4.15 from the Version drop-down. Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue . Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue . Ensure that the account information presented is as expected, and if not, amend accordingly. Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID. To manually obtain your cluster ID using the OpenShift Container Platform web console: Navigate to Home Overview . Find the value in the Cluster ID field of the Details section. Alternatively, it is possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled. From the toolbar, navigate to (?) Help Open Support Case . The Cluster ID value is autofilled. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' Complete the following questions where prompted and then click Continue : What are you experiencing? What are you expecting to happen? Define the value or impact to you or the business. Where are you experiencing this behavior? What environment? When does this behavior occur? Frequency? Repeatedly? At certain times? Upload relevant diagnostic data files and click Continue . It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue specific data that is not collected by that command. Input relevant case management details and click Continue . Preview the case details and click Submit . 2.27. Service Mesh control plane configuration reference You can customize your Red Hat OpenShift Service Mesh by modifying the default ServiceMeshControlPlane (SMCP) resource or by creating a completely custom SMCP resource. This reference section documents the configuration options available for the SMCP resource. 2.27.1. Service Mesh Control plane parameters The following table lists the top-level parameters for the ServiceMeshControlPlane resource. Table 2.31. ServiceMeshControlPlane resource parameters Name Description Type apiVersion APIVersion defines the versioned schema of this representation of an object. Servers convert recognized schemas to the latest internal value, and may reject unrecognized values. The value for ServiceMeshControlPlane version 2.0 is maistra.io/v2 . kind Kind is a string value that represents the REST resource this object represents. ServiceMeshControlPlane is the only valid value for a ServiceMeshControlPlane. metadata Metadata about this ServiceMeshControlPlane instance. You can provide a name for your Service Mesh control plane installation to keep track of your work, for example, basic . string spec The specification of the desired state of this ServiceMeshControlPlane . This includes the configuration options for all components that comprise the Service Mesh control plane. For more information, see Table 2. status The current status of this ServiceMeshControlPlane and the components that comprise the Service Mesh control plane.
For more information, see Table 3. The following table lists the specifications for the ServiceMeshControlPlane resource. Changing these parameters configures Red Hat OpenShift Service Mesh components. Table 2.32. ServiceMeshControlPlane resource spec Name Description Configurable parameters addons The addons parameter configures additional features beyond core Service Mesh control plane components, such as visualization, or metric storage. 3scale , grafana , jaeger , kiali , and prometheus . cluster The cluster parameter sets the general configuration of the cluster (cluster name, network name, multi-cluster, mesh expansion, etc.) meshExpansion , multiCluster , name , and network gateways You use the gateways parameter to configure ingress and egress gateways for the mesh. enabled , additionalEgress , additionalIngress , egress , ingress , and openshiftRoute general The general parameter represents general Service Mesh control plane configuration that does not fit anywhere else. logging and validationMessages policy You use the policy parameter to configure policy checking for the Service Mesh control plane. Policy checking can be enabled by setting spec.policy.enabled to true . mixer remote , or type . type can be set to Istiod , Mixer or None . profiles You select the ServiceMeshControlPlane profile to use for default values using the profiles parameter. default proxy You use the proxy parameter to configure the default behavior for sidecars. accessLogging , adminPort , concurrency , and envoyMetricsService runtime You use the runtime parameter to configure the Service Mesh control plane components. components , and defaults security The security parameter allows you to configure aspects of security for the Service Mesh control plane. certificateAuthority , controlPlane , identity , dataPlane and trust techPreview The techPreview parameter enables early access to features that are in technology preview. N/A telemetry If spec.mixer.telemetry.enabled is set to true , telemetry is enabled. mixer , remote , and type . type can be set to Istiod , Mixer or None . tracing You use the tracing parameter to enables distributed tracing for the mesh. sampling , type . type can be set to Jaeger or None . version You use the version parameter to specify what Maistra version of the Service Mesh control plane to install. When creating a ServiceMeshControlPlane with an empty version, the admission webhook sets the version to the current version. New ServiceMeshControlPlanes with an empty version are set to v2.0 . Existing ServiceMeshControlPlanes with an empty version keep their setting. string ControlPlaneStatus represents the current state of your service mesh. Table 2.33. ServiceMeshControlPlane resource ControlPlaneStatus Name Description Type annotations The annotations parameter stores additional, usually redundant status information, such as the number of components deployed by the ServiceMeshControlPlane . These statuses are used by the command line tool, oc , which does not yet allow counting objects in JSONPath expressions. Not configurable conditions Represents the latest available observations of the object's current state. Reconciled indicates whether the operator has finished reconciling the actual state of deployed components with the configuration in the ServiceMeshControlPlane resource. Installed indicates whether the Service Mesh control plane has been installed. Ready indicates whether all Service Mesh control plane components are ready. 
string components Shows the status of each deployed Service Mesh control plane component. string appliedSpec The resulting specification of the configuration options after all profiles have been applied. ControlPlaneSpec appliedValues The resulting values.yaml used to generate the charts. ControlPlaneSpec chartVersion The version of the charts that were last processed for this resource. string observedGeneration The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The status.conditions are not up-to-date if the status.observedGeneration field doesn't match metadata.generation . integer operatorVersion The version of the operator that last processed this resource. string readiness The readiness status of components & owned resources. string This example ServiceMeshControlPlane definition contains all of the supported parameters. Example ServiceMeshControlPlane resource apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: "" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {} 2.27.2. spec parameters 2.27.2.1. general parameters Here is an example that illustrates the spec.general parameters for the ServiceMeshControlPlane object and a description of the available parameters with appropriate values. Example general parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true Table 2.34. Istio general parameters Parameter Description Values Default value Use to configure logging for the Service Mesh control plane components. 
N/A Use to specify the component logging level. Possible values: debug , info , warn , error , fatal . N/A Use to enable or disable JSON logging. true / false N/A Use to enable or disable validation messages to the status fields of istio.io resources. This can be useful for detecting configuration errors in resources. true / false N/A 2.27.2.2. profiles parameters You can create reusable configurations with ServiceMeshControlPlane object profiles. If you do not configure the profile setting, Red Hat OpenShift Service Mesh uses the default profile. Here is an example that illustrates the spec.profiles parameter for the ServiceMeshControlPlane object: Example profiles parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName For information about creating profiles, see the Creating control plane profiles . For more detailed examples of security configuration, see Mutual Transport Layer Security (mTLS) . 2.27.2.3. techPreview parameters The spec.techPreview parameter enables early access to features that are in Technology Preview. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.27.2.4. tracing parameters The following example illustrates the spec.tracing parameters for the ServiceMeshControlPlane object, and a description of the available parameters with appropriate values. Example tracing parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger Table 2.35. Istio tracing parameters Parameter Description Values Default value The sampling rate determines how often the Envoy proxy generates a trace. You use the sampling rate to control what percentage of requests get reported to your tracing system. Integer values between 0 and 10000 representing increments of 0.01% (0 to 100%). For example, setting the value to 10 samples 0.1% of requests, setting the value to 100 will sample 1% of requests setting the value to 500 samples 5% of requests, and a setting of 10000 samples 100% of requests. 10000 (100% of traces) Currently the only tracing type that is supported is Jaeger . Jaeger is enabled by default. To disable tracing, set the type parameter to None . None , Jaeger Jaeger 2.27.2.5. version parameter The Red Hat OpenShift Service Mesh Operator supports installation of different versions of the ServiceMeshControlPlane . You use the version parameter to specify what version of the Service Mesh control plane to install. If you do not specify a version parameter when creating your SMCP, the Operator sets the value to the latest version: (2.6). Existing ServiceMeshControlPlane objects keep their version setting during upgrades of the Operator. 2.27.2.6. 3scale configuration The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource. 
Example 3scale parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true # ... Table 2.36. 3scale parameters Parameter Description Values Default value enabled Whether to use the 3scale adapter true / false false PARAM_THREESCALE_LISTEN_ADDR Sets the listen address for the gRPC server Valid port number 3333 PARAM_THREESCALE_LOG_LEVEL Sets the minimum log output level. debug , info , warn , error , or none info PARAM_THREESCALE_LOG_JSON Controls whether the log is formatted as JSON true / false true PARAM_THREESCALE_LOG_GRPC Controls whether the log contains gRPC info true / false true PARAM_THREESCALE_REPORT_METRICS Controls whether 3scale system and backend metrics are collected and reported to Prometheus true / false true PARAM_THREESCALE_METRICS_PORT Sets the port that the 3scale /metrics endpoint can be scrapped from Valid port number 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS Time period, in seconds, to wait before purging expired items from the cache Time period in seconds 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS Time period before expiry when cache elements are attempted to be refreshed Time period in seconds 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX Max number of items that can be stored in the cache at any time. Set to 0 to disable caching Valid number 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES The number of times unreachable hosts are retried during a cache update loop Valid number 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN Allow to skip certificate verification when calling 3scale APIs. Enabling this is not recommended. true / false false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS Sets the number of seconds to wait before terminating requests to 3scale System and Backend Time period in seconds 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS Sets the maximum amount of seconds (+/-10% jitter) a connection may exist before it is closed Time period in seconds 60 PARAM_USE_CACHE_BACKEND If true, attempt to create an in-memory apisonator cache for authorization requests true / false false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale Time period in seconds 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests true / false true 2.27.3. status parameter The status parameter describes the current state of your service mesh. This information is generated by the Operator and is read-only. Table 2.37. Istio status parameters Name Description Type observedGeneration The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. 
The status.conditions are not up-to-date if the status.observedGeneration field doesn't match metadata.generation . integer annotations The annotations parameter stores additional, usually redundant status information, such as the number of components deployed by the ServiceMeshControlPlane object. These statuses are used by the command line tool, oc , which does not yet allow counting objects in JSONPath expressions. Not configurable readiness The readiness status of components and owned resources. string operatorVersion The version of the Operator that last processed this resource. string components Shows the status of each deployed Service Mesh control plane component. string appliedSpec The resulting specification of the configuration options after all profiles have been applied. ControlPlaneSpec conditions Represents the latest available observations of the object's current state. Reconciled indicates that the Operator has finished reconciling the actual state of deployed components with the configuration in the ServiceMeshControlPlane resource. Installed indicates that the Service Mesh control plane has been installed. Ready indicates that all Service Mesh control plane components are ready. string chartVersion The version of the charts that were last processed for this resource. string appliedValues The resulting values.yaml file that was used to generate the charts. ControlPlaneSpec 2.27.4. Additional resources For more information about how to configure the features in the ServiceMeshControlPlane resource, see the following links: Security Traffic management Metrics and traces 2.28. Kiali configuration reference When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. 2.28.1. Specifying Kiali configuration in the SMCP You can configure Kiali under the addons section of the ServiceMeshControlPlane resource. Kiali is enabled by default. To disable Kiali, set spec.addons.kiali.enabled to false . You can specify your Kiali configuration in either of two ways: Specify the Kiali configuration in the ServiceMeshControlPlane resource under spec.addons.kiali.install . This approach has some limitations, because the complete list of Kiali configurations is not available in the SMCP. Configure and deploy a Kiali instance and specify the name of the Kiali resource as the value for spec.addons.kiali.name in the ServiceMeshControlPlane resource. You must create the CR in the same namespace as the Service Mesh control plane, for example, istio-system . If a Kiali resource matching the value of name exists, the control plane will configure that Kiali resource for use with the control plane. This approach lets you fully customize your Kiali configuration in the Kiali resource. Note that with this approach, various fields in the Kiali resource are overwritten by the Service Mesh Operator, specifically, the accessible_namespaces list, as well as the endpoints for Grafana, Prometheus, and tracing. Example SMCP parameters for Kiali apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali Table 2.38. ServiceMeshControlPlane Kiali parameters Parameter Description Values Default value Name of Kiali custom resource. 
If a Kiali CR matching the value of name exists, the Service Mesh Operator will use that CR for the installation. If no Kiali CR exists, the Operator will create one using this name and the configuration options specified in the SMCP. string kiali This parameter enables or disables Kiali. Kiali is enabled by default. true / false true Install a Kiali resource if the named Kiali resource is not present. The install section is ignored if addons.kiali.enabled is set to false . Configuration parameters for the dashboards shipped with Kiali. This parameter enables or disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the Kiali console to make changes to the Service Mesh. true / false false Grafana endpoint configured based on spec.addons.grafana configuration. true / false true Prometheus endpoint configured based on spec.addons.prometheus configuration. true / false true Tracing endpoint configured based on Jaeger custom resource configuration. true / false true Configuration parameters for the Kubernetes service associated with the Kiali installation. Use to specify additional metadata to apply to resources. N/A N/A Use to specify additional annotations to apply to the component's service. string N/A Use to specify additional labels to apply to the component's service. string N/A Use to specify details for accessing the component's service through an OpenShift Route. N/A N/A Use to specify additional annotations to apply to the component's service ingress. string N/A Use to specify additional labels to apply to the component's service ingress. string N/A Use to customize an OpenShift Route for the service associated with a component. true / false true Use to specify the context path to the service. string N/A Use to specify a single hostname per OpenShift route. An empty hostname implies a default hostname for the Route. string N/A Use to configure the TLS for the OpenShift route. N/A Use to specify the nodePort for the component's service Values.<component>.service.nodePort.port integer N/A 2.28.2. Specifying Kiali configuration in a Kiali custom resource You can fully customize your Kiali deployment by configuring Kiali in the Kiali custom resource (CR) rather than in the ServiceMeshControlPlane (SMCP) resource. This configuration is sometimes called an "external Kiali" since the configuration is specified outside of the SMCP. Note You must deploy the ServiceMeshControlPlane and Kiali custom resources in the same namespace. For example, istio-system . You can configure and deploy a Kiali instance and then specify the name of the Kiali resource as the value for spec.addons.kiali.name in the SMCP resource. If a Kiali CR matching the value of name exists, the Service Mesh control plane will use the existing installation. This approach lets you fully customize your Kiali configuration. 2.29. Jaeger configuration reference When the Service Mesh Operator deploys the ServiceMeshControlPlane resource, it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. Important Jaeger does not use FIPS validated cryptographic modules. Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. 
As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. 2.29.1. Enabling and disabling tracing You enable distributed tracing by specifying a tracing type and a sampling rate in the ServiceMeshControlPlane resource. Default all-in-one Jaeger parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger In Red Hat OpenShift Service Mesh 2.6, the tracing type Jaeger is deprecated and disabled by default. In Red Hat OpenShift Service Mesh 2.5 and earlier, the tracing type Jaeger is enabled by default. To disable Jaeger tracing, set the spec.tracing.type parameter of the ServiceMeshControlPlane resource to None . The sampling rate determines how often the Envoy proxy generates a trace. You can use the sampling rate option to control what percentage of requests get reported to your tracing system. You can configure this setting based upon your traffic in the mesh and the amount of tracing data you want to collect. You configure sampling as a scaled integer representing 0.01% increments. For example, setting the value to 10 samples 0.1% of traces, setting the value to 500 samples 5% of traces, and a setting of 10000 samples 100% of traces. Note The SMCP sampling configuration option controls the Envoy sampling rate. You configure the Jaeger trace sampling rate in the Jaeger custom resource. 2.29.2. Specifying Jaeger configuration in the SMCP You configure Jaeger under the addons section of the ServiceMeshControlPlane resource. However, there are some limitations to what you can configure in the SMCP. When the SMCP passes configuration information to the Red Hat OpenShift distributed tracing platform (Jaeger) Operator, it triggers one of three deployment strategies: allInOne , production , or streaming . 2.29.3. Deploying the distributed tracing platform The distributed tracing platform (Jaeger) has predefined deployment strategies. You specify a deployment strategy in the Jaeger custom resource (CR) file. When you create an instance of the distributed tracing platform (Jaeger), the Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses this configuration file to create the objects necessary for the deployment. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator currently supports the following deployment strategies: allInOne (default) - This strategy is intended for development, testing, and demo purposes and it is not for production use. The main back-end components, Agent, Collector, and Query service, are all packaged into a single executable, which is configured (by default) to use in-memory storage. You can configure this deployment strategy in the SMCP. Note In-memory storage is not persistent, which means that if the Jaeger instance shuts down, restarts, or is replaced, your trace data will be lost. And in-memory storage cannot be scaled, since each pod has its own memory. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. production - The production strategy is intended for production environments, where long term storage of trace data is important, and a more scalable and highly available architecture is required. Each back-end component is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. 
The Query and Collector services are configured with a supported storage type, which is currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. You can configure this deployment strategy in the SMCP, but in order to be fully customized, you must specify your configuration in the Jaeger CR and link that to the SMCP. streaming - The streaming strategy is designed to augment the production strategy by providing a streaming capability that sits between the Collector and the Elasticsearch back-end storage. This provides the benefit of reducing the pressure on the back-end storage, under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform ( AMQ Streams / Kafka ). You cannot configure this deployment strategy in the SMCP; you must configure a Jaeger CR and link that to the SMCP. Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. 2.29.3.1. Default distributed tracing platform (Jaeger) deployment If you do not specify Jaeger configuration options, the ServiceMeshControlPlane resource will use the allInOne Jaeger deployment strategy by default. When using the default allInOne deployment strategy, set spec.addons.jaeger.install.storage.type to Memory . You can accept the defaults or specify additional configuration options under install . Control plane default Jaeger parameters (Memory) apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory 2.29.3.2. Production distributed tracing platform (Jaeger) deployment (minimal) To use the default settings for the production deployment strategy, set spec.addons.jaeger.install.storage.type to Elasticsearch and specify additional configuration options under install . Note that the SMCP only supports configuring Elasticsearch resources and image name. Control plane default Jaeger parameters (Elasticsearch) apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {} 2.29.3.3. Production distributed tracing platform (Jaeger) deployment (fully customized) The SMCP supports only minimal Elasticsearch parameters. To fully customize your production environment and access all of the Elasticsearch configuration parameters, use the Jaeger custom resource (CR) to configure Jaeger. Create and configure your Jaeger instance and set spec.addons.jaeger.name to the name of the Jaeger instance, in this example: MyJaegerInstance . Control plane with linked Jaeger production CR apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true 2.29.3.4. Streaming Jaeger deployment To use the streaming deployment strategy, you create and configure your Jaeger instance first, then set spec.addons.jaeger.name to the name of the Jaeger instance, in this example: MyJaegerInstance . 
Control plane with linked Jaeger streaming CR apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR 2.29.4. Specifying Jaeger configuration in a Jaeger custom resource You can fully customize your Jaeger deployment by configuring Jaeger in the Jaeger custom resource (CR) rather than in the ServiceMeshControlPlane (SMCP) resource. This configuration is sometimes referred to as an "external Jaeger" since the configuration is specified outside of the SMCP. Note You must deploy the SMCP and Jaeger CR in the same namespace. For example, istio-system . You can configure and deploy a standalone Jaeger instance and then specify the name of the Jaeger resource as the value for spec.addons.jaeger.name in the SMCP resource. If a Jaeger CR matching the value of name exists, the Service Mesh control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration. 2.29.4.1. Deployment best practices Red Hat OpenShift distributed tracing platform instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform (Jaeger) instances and are using sidecar injected agents, then the Red Hat OpenShift distributed tracing platform (Jaeger) instances should have unique names, and the injection annotation should explicitly specify the Red Hat OpenShift distributed tracing platform (Jaeger) instance name the tracing data should be reported to. If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform (Jaeger) instance to each tenant namespace. For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 2.29.4.2. Configuring distributed tracing security for service mesh The distributed tracing platform (Jaeger) uses OAuth for default authentication. However Red Hat OpenShift Service Mesh uses a secret called htpasswd to facilitate communication between dependent services such as Grafana, Kiali, and the distributed tracing platform (Jaeger). When you configure your distributed tracing platform (Jaeger) in the ServiceMeshControlPlane the Service Mesh automatically configures security settings to use htpasswd . If you are specifying your distributed tracing platform (Jaeger) configuration in a Jaeger custom resource, you must manually configure the htpasswd settings and ensure the htpasswd secret is mounted into your Jaeger instance so that Kiali can communicate with it. 2.29.4.2.1. Configuring distributed tracing security for service mesh from the web console You can modify the Jaeger resource to configure distributed tracing platform (Jaeger) security for use with Service Mesh in the web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. The Red Hat OpenShift Service Mesh Operator must be installed. The ServiceMeshControlPlane deployed to the cluster. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . 
Click the Project menu and select the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator . On the Operator Details page, click the Jaeger tab. Click the name of your Jaeger instance. On the Jaeger details page, click the YAML tab to modify your configuration. Edit the Jaeger custom resource file to add the htpasswd configuration as shown in the following example. spec.ingress.openshift.htpasswdFile spec.volumes spec.volumeMounts Example Jaeger resource showing htpasswd configuration apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true # ... Click Save . 2.29.4.2.2. Configuring distributed tracing security for service mesh from the command line You can modify the Jaeger resource to configure distributed tracing platform (Jaeger) security for use with Service Mesh from the command line by running the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. The Red Hat OpenShift Service Mesh Operator must be installed. The ServiceMeshControlPlane deployed to the cluster. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. Procedure Log in to the OpenShift CLI ( oc ) as a user with the cluster-admin role by running the following command. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. $ oc login https://<HOSTNAME>:6443 Change to the project where you installed the control plane, for example istio-system , by entering the following command: $ oc project istio-system Run the following command to edit the Jaeger custom resource file: $ oc edit -n openshift-distributed-tracing -f jaeger.yaml Edit the Jaeger custom resource file to add the htpasswd configuration as shown in the following example. spec.ingress.openshift.htpasswdFile spec.volumes spec.volumeMounts Example Jaeger resource showing htpasswd configuration apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true Run the following command to watch the progress of the pod deployment: $ oc get pods -n openshift-distributed-tracing 2.29.4.3.
Distributed tracing default configuration options The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform (Jaeger) resources. You can modify these parameters to customize your distributed tracing platform (Jaeger) implementation to your business needs. Generic YAML example of the Jaeger CR apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {} Table 2.39. Jaeger parameters Parameter Description Values Default value apiVersion: API version to use when creating the object. jaegertracing.io/v1 jaegertracing.io/v1 kind: Defines the kind of Kubernetes object to create. jaeger metadata: Data that helps uniquely identify the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. name: Name for the object. The name of your distributed tracing platform (Jaeger) instance. jaeger-all-in-one-inmemory spec: Specification for the object to be created. Contains all of the configuration parameters for your distributed tracing platform (Jaeger) instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. N/A strategy: Jaeger deployment strategy allInOne , production , or streaming allInOne allInOne: Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. agent: Configuration options that define the Agent. collector: Configuration options that define the Jaeger Collector. sampling: Configuration options that define the sampling strategies for tracing. storage: Configuration options that define the storage. All storage-related options must be placed under storage , rather than under the allInOne or other component options. query: Configuration options that define the Query service. ingester: Configuration options that define the Ingester service. The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform (Jaeger) deployment using the default settings. Example minimum required dist-tracing-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory 2.29.4.4. Jaeger Collector configuration options The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production strategy, or to AMQ Streams when using the streaming strategy. The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster. Table 2.40. Parameters used by the Operator to define the Jaeger Collector Parameter Description Values Specifies the number of Collector replicas to create. Integer, for example, 5 Table 2.41. 
Configuration parameters passed to the Collector Parameter Description Values Configuration options that define the Jaeger Collector. The number of workers pulling from the queue. Integer, for example, 50 The size of the Collector queue. Integer, for example, 2000 The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. Label for the producer. Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator will self-provision Kafka. Logging level for the Collector. Possible values: debug , info , warn , error , fatal , panic . To accept OTLP/gRPC, explicitly enable the otlp . All the other options are optional. To accept OTLP/HTTP, explicitly enable the otlp . All the other options are optional. 2.29.4.5. Distributed tracing sampling configuration options The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler. While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage. Note This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client. When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again. distributed tracing platform (Jaeger) libraries support the following samplers: Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param property. For example, using sampling.param=0.1 samples approximately 1 in 10 traces. Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0 samples requests with the rate of 2 traces per second. Table 2.42. Jaeger sampling options Parameter Description Values Default value Configuration options that define the sampling strategies for tracing. If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services. Sampling strategy to use. See descriptions above. Valid values are probabilistic , and ratelimiting . probabilistic Parameters for the selected sampling strategy. Decimal and integer values (0, .1, 1, 10) 1 This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled. 
Probabilistic sampling example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5 If there are no user-supplied configurations, the distributed tracing platform (Jaeger) uses the following settings: Default sampling spec: sampling: options: default_strategy: type: probabilistic param: 1 2.29.4.6. Distributed tracing storage configuration options You configure storage for the Collector, Ingester, and Query services under spec.storage . Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. Table 2.43. General storage parameters used by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory or elasticsearch . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments as the data does not persist if the pod is shut down. For production environments distributed tracing platform (Jaeger) supports Elasticsearch for persistent storage. memory Name of the secret, for example tracing-secret . N/A Configuration options that define the storage. Table 2.44. Elasticsearch index cleaner parameters Parameter Description Values Default value When using Elasticsearch storage, by default a job is created to clean old traces from the index. This parameter enables or disables the index cleaner job. true / false true Number of days to wait before deleting an index. Integer value 7 Defines the schedule for how often to clean the Elasticsearch index. Cron expression "55 23 * * *" 2.29.4.6.1. Auto-provisioning an Elasticsearch instance When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage section of the custom resource file. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator will provision Elasticsearch if the following configurations are set: spec.storage:type is set to elasticsearch spec.storage.elasticsearch.doNotProvision set to false spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the OpenShift Elasticsearch Operator. When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name , the Operator uses elasticsearch . Restrictions You can have only one distributed tracing platform (Jaeger) with self-provisioned Elasticsearch instance per namespace. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform (Jaeger) instance. There can be only one Elasticsearch per namespace. 
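For illustration, the following minimal sketch shows a Jaeger custom resource that meets all three self-provisioning conditions described above. The instance name jaeger-prod and the tracing-system namespace are assumptions for this sketch, and doNotProvision is shown explicitly even though false is its default:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-prod           # assumed name for this sketch
  namespace: tracing-system   # assumed namespace for this sketch
spec:
  strategy: production
  storage:
    type: elasticsearch       # condition 1: storage type is elasticsearch
    elasticsearch:
      name: elasticsearch     # name of the provisioned Elasticsearch CR (defaults to elasticsearch)
      doNotProvision: false   # condition 2: provisioning is not disabled
      nodeCount: 3
    # condition 3: spec.storage.options.es.server-urls is not set, so the Red Hat OpenShift
    # distributed tracing platform (Jaeger) Operator provisions Elasticsearch through the
    # OpenShift Elasticsearch Operator

Compare this with the external Elasticsearch case described later, where doNotProvision is set to true and es.server-urls points the instance at a cluster that was not provisioned by the OpenShift Elasticsearch Operator.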
Note If you already have installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator can use the installed OpenShift Elasticsearch Operator to provision storage. The following configuration parameters are for a self-provisioned Elasticsearch instance, that is an instance created by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file. Table 2.45. Elasticsearch resource configuration parameters Parameter Description Values Default value Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. true / false true Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. string elasticsearch Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes as "split brain" problem can happen. Integer value. For example, Proof of concept = 1, Minimum deployment =3 3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 1 Available memory for requests, based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* 16Gi Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. If not specified, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator automatically determines the most appropriate replication based on number of nodes. ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the Data nodes), FullRedundancy (each index is fully replicated on every Data node in the cluster). Use to specify whether or not distributed tracing platform (Jaeger) should use the certificate management feature of the OpenShift Elasticsearch Operator. This feature was added to {logging-title} 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. true / false true Each Elasticsearch node can operate with a lower memory setting though this is NOT recommended for production deployments. For production use, you must have no less than 16 Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64 Gi per pod. 
Production storage example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi Storage example with persistent storage: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy 1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform (Jaeger) uses emptyDir . The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with distributed tracing platform (Jaeger) instance. You can mount the same volumes if you create a distributed tracing platform (Jaeger) instance with the same name and namespace. 2.29.4.6.2. Connecting to an existing Elasticsearch instance You can use an existing Elasticsearch cluster for storage with distributed tracing platform. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator or by the OpenShift Elasticsearch Operator. When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator will not provision Elasticsearch if the following configurations are set: spec.storage.elasticsearch.doNotProvision set to true spec.storage.options.es.server-urls has a value spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch . The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name to connect to Elasticsearch. Restrictions You cannot share or reuse a OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform (Jaeger). The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform (Jaeger) instance. The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance. In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file. Table 2.46. General ES configuration parameters Parameter Description Values Default value URL of the Elasticsearch instance. The fully-qualified domain name of the Elasticsearch server. http://elasticsearch.<namespace>.svc:9200 The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans , Elasticsearch will use the smaller value of the two. 10000 [ Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count , Elasticsearch will use the smaller value of the two. 10000 The maximum lookback for spans in Elasticsearch. 72h0m0s The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. 
The client uses the sniffing process to find all nodes automatically. Disabled by default true / false false Timeout used for queries. When set to zero there is no timeout. 0s The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password . The password required by Elasticsearch. See also, es.username . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Table 2.47. ES data replication parameters Parameter Description Values Default value The number of replicas per index in Elasticsearch. 1 The number of shards per index in Elasticsearch. 5 Table 2.48. ES index configuration parameters Parameter Description Values Default value Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false true Optional prefix for distributed tracing platform (Jaeger) indices. For example, setting this to "production" creates indices named "production-tracing-*". Table 2.49. ES bulk processor configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 1000 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 200ms The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 5000000 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 1 Table 2.50. ES TLS configuration parameters Parameter Description Values Default value Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. Table 2.51. ES archive configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 0 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 0s The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 0 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 0 Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false false Enable extra storage. true / false false Optional prefix for distributed tracing platform (Jaeger) indices. For example, setting this to "production" creates indices named "production-tracing-*". The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. 0 [ Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. 
0 The maximum lookback for spans in Elasticsearch. 0s The number of replicas per index in Elasticsearch. 0 The number of shards per index in Elasticsearch. 0 The password required by Elasticsearch. See also, es.username . The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200 . The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Timeout used for queries. When set to zero there is no timeout. 0s Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Storage example with volume mounts apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret. External Elasticsearch example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public 1 URL to Elasticsearch service running in default namespace. 2 TLS configuration. In this case only CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS. 3 Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic 4 Volume mounts and volumes which are mounted into all storage components. 2.29.4.7. Managing certificates with Elasticsearch You can create and manage certificates using the OpenShift Elasticsearch Operator. Managing certificates using the OpenShift Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors. 
Important Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Starting with version 2.4, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator delegates certificate creation to the OpenShift Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator" Where the <shared-es-node-name> is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es , your custom resource might look like the following example. Example Elasticsearch CR showing annotations apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy Prerequisites The Red Hat OpenShift Service Mesh Operator is installed. The {logging-title} is installed with default configuration in your cluster. The Elasticsearch node and the Jaeger instances must be deployed in the same namespace. For example, tracing-system . You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource. Example showing useCertManagement apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true The Red Hat OpenShift distributed tracing platform (Jaeger) Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the OpenShift Elasticsearch Operator and the Red Hat OpenShift distributed tracing platform (Jaeger) Operator injects the certificates. For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the Elasticsearch log store or Configuring and deploying distributed tracing . 2.29.4.8. Query configuration options Query is a service that retrieves traces from storage and hosts the user interface to display them. Table 2.52. Parameters used by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to define Query Parameter Description Values Default value Specifies the number of Query replicas to create. Integer, for example, 2 Table 2.53. 
Configuration parameters passed to Query Parameter Description Values Default value Configuration options that define the Query service. Logging level for Query. Possible values: debug , info , warn , error , fatal , panic . The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger . This can be useful when running jaeger-query behind a reverse proxy. /<path> Sample Query configuration apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "my-jaeger" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger 2.29.4.9. Ingester configuration options Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne or production deployment strategies, you do not need to configure the Ingester service. Table 2.54. Jaeger parameters passed to the Ingester Parameter Description Values Configuration options that define the Ingester service. Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0 ), to avoid terminating the Ingester when no messages arrive during system initialization. Minutes and seconds, for example, 1m0s . Default value is 0 . The topic parameter identifies the Kafka configuration used by the collector to produce the messages, and the Ingester to consume the messages. Label for the consumer. For example, jaeger-spans . Identifies the Kafka configuration used by the Ingester to consume the messages. Label for the broker, for example, my-cluster-kafka-brokers.kafka:9092 . Logging level for the Ingester. Possible values: debug , info , warn , error , fatal , dpanic , panic . Streaming Collector and Ingester example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200 2.30. Uninstalling Service Mesh To uninstall Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance and remove its resources, you must delete the control plane, delete the Operators, and run commands to manually remove some resources. 2.30.1. Removing the Red Hat OpenShift Service Mesh control plane To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources. 2.30.1.1. Removing the Service Mesh control plane using the web console You can remove the Red Hat OpenShift Service Mesh control plane by using the web console. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Navigate to Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the ServiceMeshControlPlane menu . Click Delete Service Mesh Control Plane . Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane . 2.30.1.2. 
Removing the Service Mesh control plane using the CLI You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project. Procedure Log in to the OpenShift Container Platform CLI. Run the following command to delete the ServiceMeshMemberRoll resource: $ oc delete smmr -n istio-system default Run this command to retrieve the name of the installed ServiceMeshControlPlane : $ oc get smcp -n istio-system Replace <name_of_custom_resource> with the output from the command, and run this command to remove the custom resource: $ oc delete smcp -n istio-system <name_of_custom_resource> 2.30.2. Removing the installed Operators You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator, and the OpenShift Elasticsearch Operator. 2.30.2.1. Removing the Operators Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators. Red Hat OpenShift Service Mesh Kiali Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Procedure Log in to the OpenShift Container Platform web console. From the Operators Installed Operators page, scroll or type a keyword into the Filter by name to find each Operator. Then, click the Operator name. On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator. 2.30.3. Clean up Operator resources You can manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator by using the OpenShift CLI ( oc ). Prerequisites An account with cluster administration access. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using distributed tracing platform (Jaeger) as a stand-alone service without service mesh, do not delete the Jaeger resources. Note The OpenShift Elasticsearch Operator is installed in openshift-operators-redhat by default. The other Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed. $ oc -n openshift-operators delete ds -lmaistra-version $ oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni clusterrole/ossm-cni clusterrolebinding/ossm-cni $ oc delete clusterrole istio-view istio-edit $ oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view $ oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete $ oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete $ oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete $ oc delete crds jaegers.jaegertracing.io $ oc delete cm -n openshift-operators -lmaistra-version $ oc delete sa -n openshift-operators -lmaistra-version
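After the cleanup commands finish, you can sanity-check that nothing was left behind. The following sketch assumes the Operators were installed in the default openshift-operators namespace and that you also deleted the Jaeger resources; each command should return no Service Mesh related results:

$ oc get crds -o name | grep -E '\.istio\.io|\.maistra\.io|\.kiali\.io|jaegertracing\.io'
$ oc get cm,sa,ds -n openshift-operators -lmaistra-version

If you kept distributed tracing platform (Jaeger) as a stand-alone service, the jaegers.jaegertracing.io CRD is expected to remain.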
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: ENABLE_NATIVE_SIDECARS: \"true\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"false\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true",
"spec: meshConfig discoverySelectors: - matchLabels: env: prod region: us-east1 - matchExpressions: - key: app operator: In values: - cassandra - spark",
"spec: meshConfig: extensionProviders: - name: prometheus prometheus: {} --- apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics spec: metrics: - providers: - name: prometheus",
"spec: techPreview: gatewayAPI: enabled: true",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\" | kubectl apply -f -; }",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: techPreview: global: pathNormalization: <option>",
"oc create -f <myEnvoyFilterFile>",
"apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled gateways: ingress: enabled: true",
"label namespace istio-system istio-discovery=enabled",
"2023-05-02T15:20:42.541034Z error watch error in cluster Kubernetes: failed to list *v1alpha2.TLSRoute: the server could not find the requested resource (get tlsroutes.gateway.networking.k8s.io) 2023-05-02T15:20:42.616450Z info kube controller \"gateway.networking.k8s.io/v1alpha2/TCPRoute\" is syncing",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.5.1\" | kubectl apply -f -; }",
"apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0",
"api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^ibm.*\" - \"^kiali-operator\"",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020",
"spec: runtime: components: pilot: container: env: APPLY_WASM_PLUGINS_TO_INBOUND_ONLY: \"true\"",
"error Installer exits with open /host/etc/cni/multus/net.d/v2-2-istio-cni.kubeconfig.tmp.841118073: no such file or directory",
"oc label namespace istio-system maistra.io/ignore-namespace-",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true",
"An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.6",
"oc project istio-system",
"oc get smcp -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6",
"oc get smcp -o yaml",
"oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. oc replace -f smcp-resource.yaml",
"oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'",
"oc edit smcp.v1.maistra.io <smcp_name>",
"oc project istio-system",
"oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml",
"oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml",
"oc new-project istio-system-upgrade",
"oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml",
"spec: policy: type: Mixer",
"spec: telemetry: type: Mixer",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN",
"#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. # principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check",
"spec: tracing: sampling: 100 # 1% type: Jaeger",
"spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"",
"spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install",
"oc rollout restart <deployment>",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system",
"oc -n istio-system edit smcp <name> 1",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80",
"oc edit deployment -n <namespace> <deploymentName>",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: matchLabels: app: httpbin",
"oc -n openshift-operators get subscriptions",
"oc -n openshift-operators edit subscription <name> 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/servicemeshoperator.openshift-operators: \"\" name: servicemeshoperator namespace: openshift-operators spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n openshift-operators get po -l name=istio-operator -owide",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 tracing: type: None sampling: 10000 addons: kiali: enabled: true name: kiali grafana: enabled: true",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.6.6 66m",
"spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system edit smcp <name> 1",
"spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system edit smcp <name> 1",
"spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system get pods -owide",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: my-application spec: controlPlaneRef: namespace: istio-system name: basic",
"oc apply -f <file-name>",
"oc get smm default -n my-application",
"NAME CONTROL PLANE READY AGE default istio-system/basic True 2m11s",
"oc describe smmr default -n istio-system",
"Name: default Namespace: istio-system Labels: <none> Status: Configured Members: default my-application Members: default my-application",
"oc edit smmr default -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: memberSelectors: 1 - matchLabels: 2 mykey: myvalue 3 - matchLabels: 4 myotherkey: myothervalue 5",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic",
"oc policy add-role-to-user",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.6 security: dataPlane: mtls: true",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT",
"oc create -n <namespace> -f <policy.yaml>",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"oc create -n <namespace> -f <destination-rule.yaml>",
"kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [\"1.2.3.4\"]",
"oc create -n istio-system -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: \"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]",
"oc edit smcp <smcp-name>",
"spec: security: dataPlane: mtls: true # enable mtls for data plane # JWKSResolver extra CA # PEM-encoded certificate content to trust an additional CA jwksResolverCA: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----",
"kind: ConfigMap apiVersion: v1 data: extra.pem: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts",
"oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'",
"oc -n bookinfo delete pods --all",
"pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted",
"oc get pods -n bookinfo",
"sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-root-issuer namespace: cert-manager spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: root-ca namespace: cert-manager spec: isCA: true duration: 21600h # 900d secretName: root-ca commonName: root-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: selfsigned-root-issuer kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: root-ca spec: ca: secretName: root-ca",
"oc apply -f cluster-issuer.yaml",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 21600h secretName: istio-ca commonName: istio-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: root-ca kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca",
"oc apply -n istio-system -f istio-ca.yaml",
"helm install istio-csr jetstack/cert-manager-istio-csr -n istio-system -f deploy/examples/cert-manager/istio-csr/istio-csr.yaml",
"replicaCount: 2 image: repository: quay.io/jetstack/cert-manager-istio-csr tag: v0.6.0 pullSecretName: \"\" app: certmanager: namespace: istio-system issuer: group: cert-manager.io kind: Issuer name: istio-ca controller: configmapNamespaceSelector: \"maistra.io/member-of=istio-system\" leaderElectionNamespace: istio-system istio: namespace: istio-system revisions: [\"basic\"] server: maxCertificateDuration: 5m tls: certificateDNSNames: # This DNS name must be set in the SMCP spec.security.certificateAuthority.cert-manager.address - cert-manager-istio-csr.istio-system.svc",
"oc apply -f mesh.yaml -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: grafana: enabled: false kiali: enabled: false prometheus: enabled: false proxy: accessLogging: file: name: /dev/stdout security: certificateAuthority: cert-manager: address: cert-manager-istio-csr.istio-system.svc:443 type: cert-manager dataPlane: mtls: true identity: type: ThirdParty tracing: type: None --- apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - httpbin - sleep",
"oc new-project <namespace>",
"oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml",
"oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml",
"oc exec \"USD(oc get pod -l app=sleep -n <namespace> -o jsonpath={.items..metadata.name})\" -c sleep -n <namespace> -- curl http://httpbin.<namespace>:8000/ip -s -o /dev/null -w \"%{http_code}\\n\"",
"200",
"oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml",
"INGRESS_HOST=USD(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}')",
"curl -s -I http://USDINGRESS_HOST/headers -o /dev/null -w \"%{http_code}\" -s",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: \"true\" 1 spec: containers: - name: istio-proxy image: auto 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n istio-system get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false",
"apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"",
"oc apply -f sidecar.yaml",
"oc get sidecar",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway-canary namespace: istio-system 1 spec: selector: matchLabels: app: istio-ingressgateway istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: 2 app: istio-ingressgateway istio: ingressgateway sidecar.istio.io/inject: \"true\" spec: containers: - name: istio-proxy image: auto serviceAccountName: istio-ingressgateway --- apiVersion: v1 kind: ServiceAccount metadata: name: istio-ingressgateway namespace: istio-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: secret-reader namespace: istio-system rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-secret-reader namespace: istio-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: secret-reader subjects: - kind: ServiceAccount name: istio-ingressgateway --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy 3 metadata: name: gatewayingress namespace: istio-system spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"oc scale -n istio-system deployment/<new_gateway_deployment> --replicas <new_number_of_replicas>",
"oc scale -n istio-system deployment/<old_gateway_deployment> --replicas <new_number_of_replicas>",
"oc label service -n istio-system istio-ingressgateway app.kubernetes.io/managed-by-",
"oc patch service -n istio-system istio-ingressgateway --type='json' -p='[{\"op\": \"remove\", \"path\": \"/metadata/ownerReferences\"}]'",
"oc patch smcp -n istio-system <smcp_name> --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/gateways/ingress/enabled\", \"value\": false}]'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: false",
"kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-gateway namespace: istio-system 1 spec: host: www.example.com to: kind: Service name: istio-ingressgateway 2 weight: 100 port: targetPort: http2 wildcardPolicy: None",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc get routes",
"NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: bookinfo 1 spec: mode: deployment config: | receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: otlp: endpoint: \"tempo-sample-distributor.tracing-system.svc.cluster.local:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp]",
"oc logs -n bookinfo -l app.kubernetes.io/name=otel-collector",
"kind: ServiceMeshControlPlane apiVersion: maistra.io/v2 metadata: name: basic namespace: istio-system spec: addons: grafana: enabled: false kiali: enabled: true prometheus: enabled: true meshConfig: extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.bookinfo.svc.cluster.local policy: type: Istiod telemetry: type: Istiod version: v2.6",
"spec: tracing: type: None",
"apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100",
"apiVersion: kiali.io/v1alpha1 kind: Kiali spec: external_services: tracing: query_timeout: 30 1 enabled: true in_cluster_url: 'http://tempo-sample-query-frontend.tracing-system.svc.cluster.local:16685' url: '[Tempo query frontend Route url]' use_grpc: true 2",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: otel-disable-tls spec: host: \"otel-collector.bookinfo.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tempo namespace: tracing-system-mtls spec: host: \"*.tracing-system-mtls.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali.istio-system.svc.cluster.local trafficPolicy: tls: mode: DISABLE",
"spec: addons: jaeger: name: distr-tracing-production",
"spec: tracing: sampling: 100",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiali-monitoring-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view subjects: - kind: ServiceAccount name: kiali-service-account namespace: istio-system",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: prometheus: auth: type: bearer use_kiali_token: true query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: istio: config_map_name: istio-<smcp-name> istio_sidecar_injector_config_map_name: istio-sidecar-injector-<smcp-name> istiod_deployment_name: istiod-<smcp-name> url_service_version: 'http://istiod-<smcp-name>.istio-system:15014/version' prometheus: auth: token: secret:thanos-querier-web-token:token type: bearer use_kiali_token: false query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: addons: prometheus: enabled: false 1 grafana: enabled: false 2 kiali: name: kiali-user-workload-monitoring meshConfig: extensionProviders: - name: prometheus prometheus: {}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: user-workload-access namespace: istio-system 1 spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress",
"apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics namespace: istio-system 1 spec: selector: 2 matchLabels: app: bookinfo metrics: - providers: - name: prometheus",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system 1 spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s relabelings: - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system 1 spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [__meta_kubernetes_pod_container_name] regex: \"istio-proxy\" - action: keep sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape] - action: replace regex: (\\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: '[USD2]:USD1' sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: replace regex: (\\d+);((([0-9]+?)(\\.|USD)){4}) replacement: USD2:USD1 sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: labeldrop regex: \"__meta_kubernetes_pod_label_(.+)\" - sourceLabels: [__meta_kubernetes_namespace] action: replace targetLabel: namespace - sourceLabels: [__meta_kubernetes_pod_name] action: replace targetLabel: pod_name - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} kiali: container: resources: limits: cpu: \"90m\" memory: \"245Mi\" requests: cpu: \"30m\" memory: \"108Mi\" global.oauthproxy: container: resources: requests: cpu: \"101m\" memory: \"256Mi\" limits: cpu: \"201m\" memory: \"512Mi\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"oc get smcp basic -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.6 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local",
"spec: cluster: name:",
"spec: cluster: network:",
"spec: gateways: additionalEgress: <egress_name>:",
"spec: gateways: additionalEgress: <egress_name>: enabled:",
"spec: gateways: additionalEgress: <egress_name>: requestedNetworkView:",
"spec: gateways: additionalEgress: <egress_name>: service: metadata: labels: federation.maistra.io/egress-for:",
"spec: gateways: additionalEgress: <egress_name>: service: ports:",
"spec: gateways: additionalIngress:",
"spec: gateways: additionalIgress: <ingress_name>: enabled:",
"spec: gateways: additionalIngress: <ingress_name>: service: type:",
"spec: gateways: additionalIngress: <ingress_name>: service: type:",
"spec: gateways: additionalIngress: <ingress_name>: service: metadata: labels: federation.maistra.io/ingress-for:",
"spec: gateways: additionalIngress: <ingress_name>: service: ports:",
"spec: gateways: additionalIngress: <ingress_name>: service: ports: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: green-mesh namespace: green-mesh-system spec: gateways: additionalIngress: ingress-green-mesh: enabled: true service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery",
"kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local",
"spec: security: trust: domain:",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"oc edit -n red-mesh-system smcp red-mesh",
"oc get smcp -n red-mesh-system",
"NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"metadata: name:",
"metadata: namespace:",
"spec: remote: addresses:",
"spec: remote: discoveryPort:",
"spec: remote: servicePort:",
"spec: gateways: ingress: name:",
"spec: gateways: egress: name:",
"spec: security: trustDomain:",
"spec: security: clientID:",
"spec: security: certificateChain: kind: ConfigMap name:",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"oc create -n red-mesh-system -f servicemeshpeer.yaml",
"oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml",
"status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo",
"metadata: name:",
"metadata: namespace:",
"spec: exportRules: - type:",
"spec: exportRules: - type: NameSelector nameSelector: namespace: name:",
"spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews",
"oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>",
"oc create -n red-mesh-system -f export-to-green-mesh.yaml",
"oc get exportedserviceset <PeerMeshExportedTo> -o yaml",
"oc -n red-mesh-system get exportedserviceset green-mesh -o yaml",
"status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings",
"metadata: name:",
"metadata: namespace:",
"spec: importRules: - type:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name:",
"spec: importRules: - type: NameSelector importAsLocal:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project green-mesh-system",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings",
"oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>",
"oc create -n green-mesh-system -f import-from-red-mesh.yaml",
"oc get importedserviceset <PeerMeshImportedInto> -o yaml",
"oc -n green-mesh-system get importedserviceset/red-mesh -o yaml",
"status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>",
"oc edit -n green-mesh-system -f import-from-red-mesh.yaml",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m",
"oc create -n <application namespace> -f <DestinationRule.yaml>",
"oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"oc apply -f plugin.yaml",
"schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100",
"oc apply -f <extension>.yaml",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value",
"cat <<EOM | oc apply -f - apiVersion: kiali.io/v1alpha1 kind: OSSMConsole metadata: namespace: openshift-operators name: ossmconsole EOM",
"delete ossmconsoles <custom_resource_name> -n <custom_resource_namespace>",
"for r in USD(oc get ossmconsoles --ignore-not-found=true --all-namespaces -o custom-columns=NS:.metadata.namespace,N:.metadata.name --no-headers | sed 's/ */:/g'); do oc delete ossmconsoles -n USD(echo USDr|cut -d: -f1) USD(echo USDr|cut -d: -f2); done",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100",
"oc apply -f threescale-wasm-auth-bookinfo.yaml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net",
"oc apply -f service-entry-threescale-saas-backend.yml",
"oc apply -f destination-rule-threescale-saas-backend.yml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net",
"oc apply -f service-entry-threescale-saas-system.yml",
"oc apply -f <destination-rule-threescale-saas-system.yml>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300",
"apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 ,,,",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n istio-system",
"oc logs istio-system",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s",
"oc logs -n openshift-operators <podName>",
"oc logs -n openshift-operators istio-operator-bb49787db-zgr87",
"oc get pods -n istio-system",
"NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s",
"NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h",
"oc describe smcp <smcp-name> -n <controlplane-namespace>",
"oc describe smcp basic -n istio-system",
"oc get jaeger -n istio-system",
"NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m",
"oc get kiali -n istio-system",
"NAME AGE kiali 15m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit smcp <smcp_name>",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: proxy: accessLogging: file: name: /dev/stdout #file name",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true",
"logging:",
"logging: componentLevels:",
"logging: logAsJSON:",
"validationMessages:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger",
"tracing: sampling:",
"tracing: type:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali",
"spec: addons: kiali: name:",
"kiali: enabled:",
"kiali: install:",
"kiali: install: dashboard:",
"kiali: install: dashboard: viewOnly:",
"kiali: install: dashboard: enableGrafana:",
"kiali: install: dashboard: enablePrometheus:",
"kiali: install: dashboard: enableTracing:",
"kiali: install: service:",
"kiali: install: service: metadata:",
"kiali: install: service: metadata: annotations:",
"kiali: install: service: metadata: labels:",
"kiali: install: service: ingress:",
"kiali: install: service: ingress: metadata: annotations:",
"kiali: install: service: ingress: metadata: labels:",
"kiali: install: service: ingress: enabled:",
"kiali: install: service: ingress: contextPath:",
"install: service: ingress: hosts:",
"install: service: ingress: tls:",
"kiali: install: service: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc login https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit -n openshift-distributed-tracing -f jaeger.yaml",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc get pods -n openshift-distributed-tracing",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc -n openshift-operators delete ds -lmaistra-version",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni clusterrole/ossm-cni clusterrolebinding/ossm-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete cm -n openshift-operators -lmaistra-version",
"oc delete sa -n openshift-operators -lmaistra-version"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/service_mesh/service-mesh-2-x |
Chapter 1. Preparing to install on IBM Cloud | Chapter 1. Preparing to install on IBM Cloud The installation workflows documented in this section are for IBM Cloud(R) infrastructure environments. IBM Cloud(R) classic is not supported at this time. For more information about the difference between classic and VPC infrastructures, see the IBM(R) documentation . 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on IBM Cloud Before installing OpenShift Container Platform on IBM Cloud(R), you must create a service account and configure an IBM Cloud(R) account. See Configuring an IBM Cloud(R) account for details about creating an account, enabling API services, configuring DNS, IBM Cloud(R) account limits, and supported IBM Cloud(R) regions. You must manually manage your cloud credentials when installing a cluster to IBM Cloud(R). Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 1.3. Choosing a method to install OpenShift Container Platform on IBM Cloud You can install OpenShift Container Platform on IBM Cloud(R) using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Cloud(R) using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Cloud(R) infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Cloud(R) : You can install a customized cluster on IBM Cloud(R) infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Cloud(R) with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on IBM Cloud(R) into an existing VPC : You can install OpenShift Container Platform on an existing IBM Cloud(R) VPC. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing Virtual Private Cloud (VPC). You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.4. Next steps Configuring an IBM Cloud(R) account | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/preparing-to-install-on-ibm-cloud
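For illustration, a minimal install-config.yaml sketch for an IBM Cloud(R) installer-provisioned cluster is shown below. This is a sketch under stated assumptions: the base domain, cluster name, and region are placeholder values rather than values from this chapter; the credentialsMode: Manual line reflects the manual Cloud Credential Operator requirement described above.

apiVersion: v1
baseDomain: example.com              # placeholder base domain
credentialsMode: Manual              # CCO manual mode, required for IBM Cloud(R) installs
metadata:
  name: example-cluster              # placeholder cluster name
platform:
  ibmcloud:
    region: us-south                 # placeholder IBM Cloud(R) region
pullSecret: '{"auths": ...}'         # pull secret for cluster images (placeholder)
sshKey: ssh-ed25519 AAAA...          # optional public key for node access (placeholder)

In practice, the file is typically generated with openshift-install create install-config and then edited before running the installation program.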
Chapter 10. Publishing on the catalog | Chapter 10. Publishing on the catalog After submitting your test results through the Red Hat certification portal, your application is scanned for vulnerabilities. When the scanning is completed, you can publish your product on the Red Hat Ecosystem Catalog . A RHOSP infrastructure certification is generated if you have performed the following: You ran the required tests successfully. Red Hat reviewed the testing configuration report, and found it was valid and appropriate for the certification. Perform the following steps to publish your product on the catalog: Procedure Navigate to your Product listing page. Click Publish . Your certified application is now published on the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly-publishing-certification-catalog_rhosp-wf-cert-tests |
Chapter 6. PodMonitor [monitoring.coreos.com/v1] | Chapter 6. PodMonitor [monitoring.coreos.com/v1] Description PodMonitor defines monitoring for a set of pods. Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Pod selection for target discovery by Prometheus. 6.1.1. .spec Description Specification of desired Pod selection for target discovery by Prometheus. Type object Required selector Property Type Description attachMetadata object attachMetadata defines additional metadata which is added to the discovered targets. It requires Prometheus >= v2.37.0. bodySizeLimit string When defined, bodySizeLimit specifies a job level limit on the size of uncompressed response body that will be accepted by Prometheus. It requires Prometheus >= v2.28.0. jobLabel string The label to use to retrieve the job name from. jobLabel selects the label from the associated Kubernetes Pod object which will be used as the job label for all metrics. For example if jobLabel is set to foo and the Kubernetes Pod object is labeled with foo: bar , then Prometheus adds the job="bar" label to all ingested metrics. If the value of this field is empty, the job label of the metrics defaults to the namespace and name of the PodMonitor object (e.g. <namespace>/<name> ). keepDroppedTargets integer Per-scrape limit on the number of targets dropped by relabeling that will be kept in memory. 0 means no limit. It requires Prometheus >= v2.47.0. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. It requires Prometheus >= v2.27.0. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. It requires Prometheus >= v2.27.0. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. It requires Prometheus >= v2.27.0. namespaceSelector object Selector to select which namespaces the Kubernetes Pods objects are discovered from. podMetricsEndpoints array List of endpoints part of this PodMonitor. podMetricsEndpoints[] object PodMetricsEndpoint defines an endpoint serving Prometheus metrics to be scraped by Prometheus. podTargetLabels array (string) podTargetLabels defines the labels which are transferred from the associated Kubernetes Pod object onto the ingested metrics. sampleLimit integer sampleLimit defines a per-scrape limit on the number of scraped samples that will be accepted. scrapeClass string The scrape class to apply. scrapeProtocols array (string) scrapeProtocols defines the protocols to negotiate during a scrape. It tells clients the protocols supported by Prometheus in order of preference (from most to least preferred). If unset, Prometheus uses its default value. 
It requires Prometheus >= v2.49.0. selector object Label selector to select the Kubernetes Pod objects. targetLimit integer targetLimit defines a limit on the number of scraped targets that will be accepted. 6.1.2. .spec.attachMetadata Description attachMetadata defines additional metadata which is added to the discovered targets. It requires Prometheus >= v2.37.0. Type object Property Type Description node boolean When set to true, Prometheus must have the get permission on the Nodes objects. 6.1.3. .spec.namespaceSelector Description Selector to select which namespaces the Kubernetes Pods objects are discovered from. Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 6.1.4. .spec.podMetricsEndpoints Description List of endpoints part of this PodMonitor. Type array 6.1.5. .spec.podMetricsEndpoints[] Description PodMetricsEndpoint defines an endpoint serving Prometheus metrics to be scraped by Prometheus. Type object Property Type Description authorization object authorization configures the Authorization header credentials to use when scraping the target. Cannot be set at the same time as basicAuth , or oauth2 . basicAuth object basicAuth configures the Basic Authentication credentials to use when scraping the target. Cannot be set at the same time as authorization , or oauth2 . bearerTokenSecret object bearerTokenSecret specifies a key of a Secret containing the bearer token for scraping targets. The secret needs to be in the same namespace as the PodMonitor object and readable by the Prometheus Operator. Deprecated: use authorization instead. enableHttp2 boolean enableHttp2 can be used to disable HTTP2 when scraping the target. filterRunning boolean When true, the pods which are not running (e.g. either in Failed or Succeeded state) are dropped during the target discovery. If unset, the filtering is enabled. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase followRedirects boolean followRedirects defines whether the scrape requests should follow HTTP 3xx redirects. honorLabels boolean When true, honorLabels preserves the metric's labels when they collide with the target's labels. honorTimestamps boolean honorTimestamps controls whether Prometheus preserves the timestamps when exposed by the target. interval string Interval at which Prometheus scrapes the metrics from the target. If empty, Prometheus uses the global scrape interval. metricRelabelings array metricRelabelings configures the relabeling rules to apply to the samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config oauth2 object oauth2 configures the OAuth2 settings to use when scraping the target. It requires Prometheus >= 2.27.0. Cannot be set at the same time as authorization , or basicAuth . params object params define optional HTTP URL parameters. params{} array (string) path string HTTP path from which to scrape for metrics. If empty, Prometheus uses the default value (e.g. /metrics ). port string Name of the Pod port which this endpoint refers to. It takes precedence over targetPort . proxyUrl string proxyURL configures the HTTP Proxy URL (e.g. "http://proxyserver:2195") to go through when scraping the target. 
relabelings array relabelings configures the relabeling rules to apply the target's metadata labels. The Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config scheme string HTTP scheme to use for scraping. http and https are the expected values unless you rewrite the scheme label via relabeling. If empty, Prometheus uses the default value http . scrapeTimeout string Timeout after which Prometheus considers the scrape to be failed. If empty, Prometheus uses the global scrape timeout unless it is less than the target's scrape interval value in which the latter is used. targetPort integer-or-string Name or number of the target port of the Pod object behind the Service, the port must be specified with container port property. Deprecated: use 'port' instead. tlsConfig object TLS configuration to use when scraping the target. trackTimestampsStaleness boolean trackTimestampsStaleness defines whether Prometheus tracks staleness of the metrics that have an explicit timestamp present in scraped data. Has no effect if honorTimestamps is false. It requires Prometheus >= v2.48.0. 6.1.6. .spec.podMetricsEndpoints[].authorization Description authorization configures the Authorization header credentials to use when scraping the target. Cannot be set at the same time as basicAuth , or oauth2 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 6.1.7. .spec.podMetricsEndpoints[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.8. .spec.podMetricsEndpoints[].basicAuth Description basicAuth configures the Basic Authentication credentials to use when scraping the target. Cannot be set at the same time as authorization , or oauth2 . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 6.1.9. .spec.podMetricsEndpoints[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.10. .spec.podMetricsEndpoints[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.11. .spec.podMetricsEndpoints[].bearerTokenSecret Description bearerTokenSecret specifies a key of a Secret containing the bearer token for scraping targets. The secret needs to be in the same namespace as the PodMonitor object and readable by the Prometheus Operator. Deprecated: use authorization instead. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.12. .spec.podMetricsEndpoints[].metricRelabelings Description metricRelabelings configures the relabeling rules to apply to the samples before ingestion. Type array 6.1.13. .spec.podMetricsEndpoints[].metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. 
separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 6.1.14. .spec.podMetricsEndpoints[].oauth2 Description oauth2 configures the OAuth2 settings to use when scraping the target. It requires Prometheus >= 2.27.0. Cannot be set at the same time as authorization , or basicAuth . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 6.1.15. .spec.podMetricsEndpoints[].oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.16. .spec.podMetricsEndpoints[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 6.1.17. .spec.podMetricsEndpoints[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.18. .spec.podMetricsEndpoints[].oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.19. .spec.podMetricsEndpoints[].params Description params define optional HTTP URL parameters. Type object 6.1.20. .spec.podMetricsEndpoints[].relabelings Description relabelings configures the relabeling rules to apply the target's metadata labels. The Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 6.1.21. .spec.podMetricsEndpoints[].relabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 6.1.22. .spec.podMetricsEndpoints[].tlsConfig Description TLS configuration to use when scraping the target. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.23. .spec.podMetricsEndpoints[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.24. .spec.podMetricsEndpoints[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. 
apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 6.1.25. .spec.podMetricsEndpoints[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.26. .spec.podMetricsEndpoints[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.27. .spec.podMetricsEndpoints[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 6.1.28. .spec.podMetricsEndpoints[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.29. .spec.podMetricsEndpoints[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 6.1.30. .spec.selector Description Label selector to select the Kubernetes Pod objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.31. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.32. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/podmonitors GET : list objects of kind PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors DELETE : delete collection of PodMonitor GET : list objects of kind PodMonitor POST : create a PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} DELETE : delete a PodMonitor GET : read the specified PodMonitor PATCH : partially update the specified PodMonitor PUT : replace the specified PodMonitor 6.2.1. /apis/monitoring.coreos.com/v1/podmonitors HTTP method GET Description list objects of kind PodMonitor Table 6.1. HTTP responses HTTP code Response body 200 - OK PodMonitorList schema 401 - Unauthorized Empty 6.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors HTTP method DELETE Description delete collection of PodMonitor Table 6.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PodMonitor Table 6.3. HTTP responses HTTP code Response body 200 - OK PodMonitorList schema 401 - Unauthorized Empty HTTP method POST Description create a PodMonitor Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body PodMonitor schema Table 6.6. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 202 - Accepted PodMonitor schema 401 - Unauthorized Empty 6.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the PodMonitor HTTP method DELETE Description delete a PodMonitor Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodMonitor Table 6.10. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodMonitor Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodMonitor Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body PodMonitor schema Table 6.15. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/podmonitor-monitoring-coreos-com-v1
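A short, hedged illustration of the schema and endpoints documented above: the shell sketch below creates a minimal PodMonitor and then reads it back through the namespaced endpoint. The namespace ns1, the app: example selector, and the web port name are assumptions made for this example and do not come from the reference itself.

# Create a PodMonitor that exercises spec.selector, spec.podMetricsEndpoints, and relabelings (assumed names)
oc apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-podmonitor
  namespace: ns1
spec:
  selector:
    matchLabels:
      app: example
  podMetricsEndpoints:
    - port: web
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_node_name]
          targetLabel: node
EOF

# Read the object back via GET /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name}
oc get --raw /apis/monitoring.coreos.com/v1/namespaces/ns1/podmonitors/example-podmonitor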
Chapter 19. Changing a hostname | Chapter 19. Changing a hostname The hostname of a system is the name on the system itself. You can set the name when you install RHEL, and you can change it afterwards. 19.1. Changing a hostname by using nmcli You can use the nmcli utility to update the system hostname. Note that other utilities might use a different term, such as static or persistent hostname. Procedure Optional: Display the current hostname setting: Set the new hostname: NetworkManager automatically restarts the systemd-hostnamed to activate the new name. For the changes to take effect, reboot the host: Alternatively, if you know which services use the hostname: Restart all services that only read the hostname when the service starts: Active shell users must re-login for the changes to take effect. Verification Display the hostname: 19.2. Changing a hostname by using hostnamectl You can use the hostnamectl utility to update the hostname. By default, this utility sets the following hostname types: Static hostname: Stored in the /etc/hostname file. Typically, services use this name as the hostname. Pretty hostname: A descriptive name, such as Proxy server in data center A . Transient hostname: A fall-back value that is typically received from the network configuration. Procedure Optional: Display the current hostname setting: Set the new hostname: This command sets the static, pretty, and transient hostname to the new value. To set only a specific type, pass the --static , --pretty , or --transient option to the command. The hostnamectl utility automatically restarts the systemd-hostnamed to activate the new name. For the changes to take effect, reboot the host: Alternatively, if you know which services use the hostname: Restart all services that only read the hostname when the service starts: Active shell users must re-login for the changes to take effect. Verification Display the hostname: | [
"nmcli general hostname old-hostname.example.com",
"nmcli general hostname new-hostname.example.com",
"reboot",
"systemctl restart <service_name>",
"nmcli general hostname new-hostname.example.com",
"hostnamectl status --static old-hostname.example.com",
"hostnamectl set-hostname new-hostname.example.com",
"reboot",
"systemctl restart <service_name>",
"hostnamectl status --static new-hostname.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/assembly_changing-a-hostname_configuring-and-managing-networking |
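Where it helps to see the three hostname types handled individually, the following sketch sets the static, pretty, and transient names separately and then verifies the result. It is illustrative only; the example hostname and the descriptive pretty name are assumptions.

hostnamectl set-hostname --static new-hostname.example.com
hostnamectl set-hostname --pretty "Proxy server in data center A"
hostnamectl set-hostname --transient new-hostname.example.com
hostnamectl status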
probe::stap.pass1a | probe::stap.pass1a Name probe::stap.pass1a - Starting stap pass1 (parsing user script) Synopsis stap.pass1a Values session the systemtap_session variable s Description pass1a fires just after the call to gettimeofday , before the user script is parsed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-pass1a |
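Because the stap.* probes instrument the translator itself, one way to observe pass1a is to monitor one stap invocation from another. The sketch below is an assumption-based illustration rather than a canonical recipe; the triggering one-liner in the second terminal is arbitrary.

# Terminal 1: report when any stap process begins pass 1 (parsing)
stap -e 'probe stap.pass1a { printf("pass1a: parsing started, session=%x\n", session) }'

# Terminal 2: run any other stap session to trigger the marker
stap -e 'probe begin { println("hello"); exit() }'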
Chapter 41. PodDisruptionBudgetTemplate schema reference | Chapter 41. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties A PodDisruptionBudget (PDB) is an OpenShift resource that ensures high availability by specifying the minimum number of pods that must be available during planned maintenance or upgrades. Streams for Apache Kafka creates a PDB for every new StrimziPodSet or Deployment . By default, the PDB allows only one pod to be unavailable at any given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property. StrimziPodSet custom resources manage pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is automatically converted to a minAvailable value when creating the PDB resource, which effectively serves the same purpose, as illustrated in the following examples: If there are three broker pods and the maxUnavailable property is set to 1 in the Kafka resource, the minAvailable setting is 2 , allowing one pod to be unavailable. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Example PodDisruptionBudget template configuration # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 41.1. PodDisruptionBudgetTemplate schema properties Property Property type Description metadata MetadataTemplate Metadata to apply to the PodDisruptionBudgetTemplate resource. maxUnavailable integer Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. | [
"template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-poddisruptionbudgettemplate-reference |
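To see how the template is applied in practice, the hedged sketch below embeds it in a minimal Kafka resource and then checks the PodDisruptionBudget generated by the Cluster Operator. The cluster name, namespace, and ephemeral storage are illustrative assumptions only.

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
    template:
      podDisruptionBudget:
        metadata:
          labels:
            key1: label1
        maxUnavailable: 1
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
EOF

# With three broker pods and maxUnavailable: 1, the generated PDB should report minAvailable: 2
oc get pdb -n kafka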
Chapter 4. Unsupported and deprecated features | Chapter 4. Unsupported and deprecated features Cryostat 3.0 removes some features because of their high maintenance costs, low community interest, and better alternative solutions. Target TLS certificate upload The Security view of the Cryostat web console no longer offers a way to upload SSL/TLS certificates directly into the Cryostat server truststore. The Security view now only displays a list of certificates that have already been loaded. From Cryostat 3.0 onward, new certificates must be added to the storage volume that Cryostat reads at startup. You can configure any new certificates by using the existing TrustedCertSecrets property in the Cryostat CR. JMX target credentials passed through API requests The X-JMX-Authorization header is no longer supported. This means that Cryostat no longer accepts API requests from target applications to allow Cryostat to authenticate itself and store credentials in memory for the duration of a JMX connection to an application. From Cryostat 3.0 onward, JMX credentials for target applications are always stored in an encrypted database that is stored on a persistent volume claim (PVC) on Red Hat OpenShift. The Settings view of the Cryostat web console also no longer offers an advanced configuration for selecting which authentication mechanism to use. Cryostat self-discovery When you install Cryostat by using the Cryostat Operator or a Helm chart, Cryostat no longer discovers itself as a target application by default. In releases, Cryostat exposed a JMX port on a Kubernetes service and the Cryostat Operator generated credentials and assigned a TLS certificate to help secure this port. From Cryostat 3.0 onward, the JMX port exposed by Cryostat is disabled and the corresponding Service port is removed, which means that Cryostat can no longer discover itself as a connectable target. This also means that Cryostat no longer appears in the target selection list or in the Topology view of the Cryostat web console. Note If you want to connect Cryostat to itself to check performance, you can create a Custom Target with the URL value localhost:0 . This value instructs the JVM to open a local JMX connection to itself, without exposing a port to the network, which means that additional authentication and TLS encryption is unnecessary. Cryostat Operator installation for a single namespace Support is no longer provided for installing the Cryostat Operator in a single namespace or subset of cluster namespaces. From Cryostat 3.0 onward, the Cryostat Operator can only be installed on a cluster-wide basis. Cluster-wide installation is the preferred mode for the Operator Lifecycle Manager and per-namespace installations are a deprecated feature. Cluster Cryostat API The Cluster Cryostat API is no longer supported. In this release, when installing a Cryostat instance by using the Cryostat Operator, you can no longer select a Cluster Cryostat option in the the Provided APIs section of the Details tab. From Cryostat 3.0 onward, you can use the Cryostat API to create both single-namespace and multi-namespace Cryostat instances. When you install a Cryostat instance by using the Cryostat Operator, the Cryostat API now enables you to specify an optional list of target namespaces. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/unsupported-deprecated-features_cryostat |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.17/rn-openjdk-temurin-support-policy |
Enabling dynamic JFR recordings based on MBean custom triggers | Enabling dynamic JFR recordings based on MBean custom triggers Red Hat build of Cryostat 3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/enabling_dynamic_jfr_recordings_based_on_mbean_custom_triggers/index |
Chapter 8. Monitoring | Chapter 8. Monitoring 8.1. Monitoring Red Hat JBoss Data Virtualization Red Hat JBoss Data Virtualization provides information about its current operational state. This information can be useful in tuning, monitoring, and managing load and throughput. Runtime data can be accessed using administrative tools such as web-console, AdminShell or Admin API. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/chap-monitoring |
11.2.4. Channel Bonding Interfaces | 11.2.4. Channel Bonding Interfaces Red Hat Enterprise Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface . Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. Warning The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding in not supported with direct connection using crossover cables? for more information. Note The active-backup, balance-tlb and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for Modes 0, 2, and 3, but for Mode 4 LACP and EtherChannel are required. See the documentation supplied with your switch and the bonding.txt file in the kernel-doc package (see Section 31.9, "Additional Resources" ). 11.2.4.1. Check if Bonding Kernel Module is Installed In Red Hat Enterprise Linux 6, the bonding module is not loaded by default. You can load the module by issuing the following command as root : No visual output indicates the module was not running and has now been loaded. This activation will not persist across system restarts. See Section 31.7, "Persistent Module Loading" for an explanation of persistent module loading. Note that given a correct configuration file using the BONDING_OPTS directive, the bonding module will be loaded as required and therefore does not need to be loaded separately. To display information about the module, issue the following command: See the modprobe(8) man page for more command options and see Chapter 31, Working with Kernel Modules for information on loading and unloading modules. 11.2.4.2. Create a Channel Bonding Interface To create a channel bonding interface, create a file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond N , replacing N with the number for the interface, such as 0 . The contents of the file can be identical to whatever type of interface is getting bonded, such as an Ethernet interface. The only difference is that the DEVICE directive is bond N , replacing N with the number for the interface. The NM_CONTROLLED directive can be added to prevent NetworkManager from configuring this device. Example 11.1. Example ifcfg-bond0 interface configuration file The following is an example of a channel bonding interface configuration file: The MAC address of the bond will be taken from the first interface to be enslaved. It can also be specified using the HWADDR directive if required. If you want NetworkManager to control this interface, remove the NM_CONTROLLED=no directive, or set it to yes , and add TYPE=Bond and BONDING_MASTER=yes . After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER and SLAVE directives to their configuration files. The configuration files for each of the channel-bonded interfaces can be nearly identical. Example 11.2. 
Example ifcfg-ethX bonded interface configuration file If two Ethernet interfaces are being channel bonded, both eth0 and eth1 can be configured as follows: In this example, replace X with the numerical value for the interface. Once the interfaces have been configured, restart the network service to bring the bond up. As root , issue the following command: To view the status of a bond, view the /proc/ file by issuing a command in the following format: cat /proc/net/bonding/bond N For example: For further instructions and advice on configuring the bonding module and to view the list of bonding parameters, see Section 31.8.1, "Using Channel Bonding" . Support for bonding was added to NetworkManager in Red Hat Enterprise Linux 6.3. See Section 11.2.1, "Ethernet Interfaces" for an explanation of NM_CONTROLLED and the NM_BOND_VLAN_ENABLED directive. Important In Red Hat Enterprise Linux 6, interface-specific parameters for the bonding kernel module must be specified as a space-separated list in the BONDING_OPTS=" bonding parameters " directive in the ifcfg-bond N interface file. Do not specify options specific to a bond in /etc/modprobe.d/ bonding .conf , or in the deprecated /etc/modprobe.conf file. The max_bonds parameter is not interface specific and therefore, if required, should be specified in /etc/modprobe.d/bonding.conf as follows: However, the max_bonds parameter should not be set when using ifcfg-bond N files with the BONDING_OPTS directive as this directive will cause the network scripts to create the bond interfaces as required. Note that any changes to /etc/modprobe.d/bonding.conf will not take effect until the module is loaded. A running module must first be unloaded. See Chapter 31, Working with Kernel Modules for more information on loading and unloading modules. 11.2.4.2.1. Creating Multiple Bonds In Red Hat Enterprise Linux 6, for each bond a channel bonding interface is created including the BONDING_OPTS directive. This configuration method is used so that multiple bonding devices can have different configurations. To create multiple channel bonding interfaces, proceed as follows: Create multiple ifcfg-bond N files with the BONDING_OPTS directive; this directive will cause the network scripts to create the bond interfaces as required. Create, or edit existing, interface configuration files to be bonded and include the SLAVE directive. Assign the interfaces to be bonded, the slave interfaces, to the channel bonding interfaces by means of the MASTER directive. Example 11.3. Example multiple ifcfg-bondN interface configuration files The following is an example of a channel bonding interface configuration file: In this example, replace N with the number for the bond interface. For example, to create two bonds create two configuration files, ifcfg-bond0 and ifcfg-bond1 . Create the interfaces to be bonded as per Example 11.2, "Example ifcfg-ethX bonded interface configuration file" and assign them to the bond interfaces as required using the MASTER=bond N directive. For example, continuing on from the example above, if two interfaces per bond are required, then for two bonds create four interface configuration files and assign the first two using MASTER=bond 0 and the next two using MASTER=bond 1 .
"~]# modprobe --first-time bonding",
"~]USD modinfo bonding",
"DEVICE=bond0 IPADDR=192.168.1.1 NETMASK=255.255.255.0 ONBOOT=yes BOOTPROTO=none USERCTL=no NM_CONTROLLED=no BONDING_OPTS=\" bonding parameters separated by spaces \"",
"DEVICE=eth X BOOTPROTO=none ONBOOT=yes MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED=no",
"~]# service network restart",
"~]USD cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: load balancing (round-robin) MII Status: down MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0",
"options bonding max_bonds=1",
"DEVICE=bondN IPADDR=192.168.1.1 NETMASK=255.255.255.0 ONBOOT=yes BOOTPROTO=none USERCTL=no NM_CONTROLLED=no BONDING_OPTS=\" bonding parameters separated by spaces \""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-networkscripts-interfaces-chan |
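Tying the fragments above together, the following hedged script writes a complete bond0 configuration with two slave interfaces and then brings the bond up; the IP addressing, bonding options, and interface names are assumptions to be adapted to the actual environment.

# Run as root: write the bond0 configuration, then two slave configurations, then restart networking
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=active-backup miimon=100"
EOF
for dev in eth0 eth1; do
cat > "/etc/sysconfig/network-scripts/ifcfg-${dev}" <<EOF
DEVICE=${dev}
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no
EOF
done
service network restart
cat /proc/net/bonding/bond0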
Chapter 4. Support for FIPS cryptography | Chapter 4. Support for FIPS cryptography You can install an OpenShift Container Platform cluster in FIPS mode. OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL 9 computer that is configured to operate in FIPS mode, and you must use a FIPS-capable version of the installation program. See the section titled Obtaining a FIPS-capable installation program using `oc adm extract` . For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 4.1. Obtaining a FIPS-capable installation program using oc adm extract OpenShift Container Platform requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. You can obtain this binary by extracting it from the release image by using the OpenShift CLI ( oc ). After you have obtained the binary, you proceed with the cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . Prerequisites You have installed the OpenShift CLI ( oc ) with version 4.16 or newer. Procedure Extract the FIPS-capable binary from the installation program by running the following command: $ oc adm release extract --registry-config "${pullsecret_file}" --command=openshift-install-fips --to "${extract_dir}" ${RELEASE_IMAGE} where: <pullsecret_file> Specifies the name of a file that contains your pull secret. <extract_dir> Specifies the directory where you want to extract the binary. <RELEASE_IMAGE> Specifies the Quay.io URL of the OpenShift Container Platform release you are using. For more information on finding the release image, see Extracting the OpenShift Container Platform installation program . Proceed with cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . Additional resources Extracting the OpenShift Container Platform installation program 4.2. Obtaining a FIPS-capable installation program using the public OpenShift mirror OpenShift Container Platform requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. You can obtain this binary by downloading it from the public OpenShift mirror.
After you have obtained the binary, proceed with the cluster installation, replacing all instances of the openshift-install binary with openshift-install-fips . Prerequisites You have access to the internet. Procedure Download the installation program from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-4.18/openshift-install-rhel9-amd64.tar.gz . Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-rhel9-amd64.tar.gz Proceed with cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . 4.3. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 4.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.18 Attributes Limitations FIPS support in RHEL 9 and RHCOS operating systems. The FIPS implementation does not use a function that performs hash computation and signature generation or validation in a single step. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 9 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using x86_64 , ppc64le , and s390x architectures. 4.4. FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 4.4.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 4.4.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 4.4.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. 4.5.
Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . Amazon Web Services Microsoft Azure Bare metal Google Cloud Platform IBM Cloud(R) IBM Power(R) IBM Z(R) and IBM(R) LinuxONE IBM Z(R) and IBM(R) LinuxONE with RHEL KVM IBM Z(R) and IBM(R) LinuxONE in an LPAR Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Installing the system in FIPS mode . | [
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=openshift-install-fips --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"tar -xvf openshift-install-rhel9-amd64.tar.gz"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installation_overview/installing-fips |
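As a hedged, end-to-end sketch of the points above, the following commands confirm FIPS mode on the RHEL host, check the fips: true line that must be present in install-config.yaml before deployment, and run the FIPS-capable binary in place of openshift-install. The cluster directory name mycluster is an assumption for illustration.

# Verify that the RHEL 9 host running the installer is in FIPS mode
fips-mode-setup --check

# Confirm that FIPS is enabled in the install configuration before deploying the cluster
grep '^fips:' mycluster/install-config.yaml
# fips: true

# Deploy the cluster with the FIPS-capable binary extracted or downloaded earlier
./openshift-install-fips create cluster --dir mycluster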
Chapter 18. Setting up distributed tracing | Chapter 18. Setting up distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In Streams for Apache Kafka, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. It complements the metrics that are available to view in JMX metrics , as well as the component loggers. Support for tracing is built in to the following Kafka components: Kafka Connect MirrorMaker MirrorMaker 2 Streams for Apache Kafka Bridge Tracing is not supported for Kafka brokers. You add tracing configuration to the properties file of the component. To enable tracing, you set environment variables and add the library of the tracing system to the Kafka classpath. For Jaeger tracing, you can add tracing artifacts for OpenTelemetry with the Jaeger Exporter. Note Streams for Apache Kafka no longer supports OpenTracing. If you were previously using OpenTracing with Jaeger, we encourage you to transition to using OpenTelemetry instead. To enable tracing in Kafka producers, consumers, and Kafka Streams API applications, you instrument application code. When instrumented, clients generate trace data; for example, when producing messages or writing offsets to the log. Note Setting up tracing for applications and systems beyond Streams for Apache Kafka is outside the scope of this content. 18.1. Outline of procedures To set up tracing for Streams for Apache Kafka, follow these procedures in order: Set up tracing for Kafka Connect, MirrorMaker 2, and MirrorMaker: Enable tracing for Kafka Connect Enable tracing for MirrorMaker 2 Enable tracing for MirrorMaker Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Note For information on enabling tracing for the Kafka Bridge, see Using the Streams for Apache Kafka Bridge . 18.2. Tracing options Distributed traces consist of spans, which represent individual units of work performed over a specific time period. When instrumented with tracers, applications generate traces that follow requests as they move through the system, making it easier to identify delays or issues. OpenTelemetry, a telemetry framework, provides APIs for tracing that are independent of any specific backend tracing system. In Streams for Apache Kafka, the default protocol for transmitting traces between Kafka components and tracing systems is OpenTelemetry's OTLP (OpenTelemetry Protocol), a vendor-neutral protocol. While OTLP is the default, Streams for Apache Kafka also supports other tracing systems, such as Jaeger. Jaeger is a distributed tracing system designed for monitoring microservices, and its user interface allows you to query, filter, and analyze trace data in detail. The Jaeger user interface showing a simple query Additional resources Jaeger documentation OpenTelemetry documentation 18.3. Environment variables for tracing Use environment variables to enable tracing for Kafka components or to initialize a tracer for Kafka clients. Tracing environment variables are subject to change. 
For the latest information, see the OpenTelemetry documentation . The following table describes the key environment variables for setting up tracing with OpenTelemetry. Table 18.1. OpenTelemetry environment variables Property Required Description OTEL_SERVICE_NAME Yes The name of the tracing service for OpenTelemetry, such as OTLP or Jaeger. OTEL_EXPORTER_OTLP_ENDPOINT Yes (if using OTLP exporter) The OTLP endpoint for exporting trace data to the tracing system. For Jaeger tracing, specify the OTEL_EXPORTER_JAEGER_ENDPOINT . For other tracing systems, specify the appropriate endpoint . OTEL_TRACES_EXPORTER No (unless using a non-OTLP exporter) The exporter used for tracing. The default is otlp , which does not need to be specified. For Jaeger tracing, set this variable to jaeger . For other tracing systems, specify the appropriate exporter . OTEL_EXPORTER_OTLP_CERTIFICATE No (required if using TLS with OTLP) The path to the file containing trusted certificates for TLS authentication. Required to secure communication between Kafka components and the OpenTelemetry endpoint when using TLS with the otlp exporter. 18.4. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. You can enable tracing that uses OpenTelemetry. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer and consumer tracing in the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the ./config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the ./config/connect-distributed.properties file. Add the following tracing interceptor properties to the configuration file: Properties for OpenTelemetry producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor With tracing enabled, you initialize tracing when you run the Kafka Connect script. Save the configuration file. Set the environment variables for tracing. Start Kafka Connect in standalone or distributed mode with the configuration file as a parameter (plus any connector properties): Running Kafka Connect in standalone mode ./bin/connect-standalone.sh \ ./config/connect-standalone.properties \ connector1.properties \ [connector2.properties ...] Running Kafka Connect in distributed mode ./bin/connect-distributed.sh ./config/connect-distributed.properties The internal consumers and producers of Kafka Connect are now enabled for tracing. 18.5. Enabling tracing for MirrorMaker 2 Enable distributed tracing for MirrorMaker 2 by defining the Interceptor properties in the MirrorMaker 2 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2 component. You can enable tracing that uses OpenTelemetry. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer and consumer tracing in the opt/kafka/config/connect-mirror-maker.properties file. 
Add the following tracing interceptor properties to the configuration file: Properties for OpenTelemetry header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor ByteArrayConverter prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. With tracing enabled, you initialize tracing when you run the Kafka MirrorMaker 2 script. Save the configuration file. Set the environment variables for tracing. Start MirrorMaker 2 with the producer and consumer configuration files as parameters: ./bin/connect-mirror-maker.sh \ ./config/connect-mirror-maker.properties The internal consumers and producers of MirrorMaker 2 are now enabled for tracing. 18.6. Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. You can enable tracing that uses OpenTelemetry. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer tracing in the ./config/producer.properties file. Add the following tracing interceptor property: Producer property for OpenTelemetry producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor Save the configuration file. Configure consumer tracing in the ./config/consumer.properties file. Add the following tracing interceptor property: Consumer property for OpenTelemetry consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor With tracing enabled, you initialize tracing when you run the Kafka MirrorMaker script. Save the configuration file. Set the environment variables for tracing. Start MirrorMaker with the producer and consumer configuration files as parameters: ./bin/kafka-mirror-maker.sh \ --producer.config ./config/producer.properties \ --consumer.config ./config/consumer.properties \ --num.streams=2 The internal consumers and producers of MirrorMaker are now enabled for tracing. 18.7. Initializing tracing for Kafka clients Initialize a tracer for OpenTelemetry, then instrument your client applications for distributed tracing. You can instrument Kafka producer and consumer clients, and Kafka Streams API applications. Configure and initialize a tracer using a set of tracing environment variables . 
Procedure In each client application add the dependencies for the tracer: Add the Maven dependencies to the pom.xml file for the client application: Dependencies for OpenTelemetry <dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha-redhat-00001</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency> Define the configuration of the tracer using the tracing environment variables . Create a tracer, which is initialized with the environment variables: Creating a tracer for OpenTelemetry OpenTelemetry ot = GlobalOpenTelemetry.get(); Register the tracer as a global tracer: GlobalTracer.register(tracer); Instrument your client: Section 18.8, "Instrumenting producers and consumers for tracing" Section 18.9, "Instrumenting Kafka Streams applications for tracing" 18.8. Instrumenting producers and consumers for tracing Instrument application code to enable tracing in Kafka producers and consumers. Use a decorator pattern or interceptors to instrument your Java producer and consumer application code for tracing. You can then record traces when messages are produced or retrieved from a topic. OpenTelemetry instrumentation project provides classes that support instrumentation of producers and consumers. Decorator instrumentation For decorator instrumentation, create a modified producer or consumer instance for tracing. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the consumer or producer configuration. Prerequisites You have initialized tracing for the client . You enable instrumentation in producer and consumer applications by adding the tracing JARs as dependencies to your project. Procedure Perform these steps in the application code of each producer and consumer application. Instrument your client application code using either a decorator pattern or interceptors. To use a decorator pattern, create a modified producer or consumer instance to send or receive messages. You pass the original KafkaProducer or KafkaConsumer class. 
Example decorator instrumentation for OpenTelemetry // Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton("mytopic")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use interceptors, set the interceptor class in the producer or consumer configuration. You use the KafkaProducer and KafkaConsumer classes in the usual way. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer configuration using interceptors senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...); Example consumer configuration using interceptors consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList("messages")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 18.9. Instrumenting Kafka Streams applications for tracing Instrument application code to enable tracing in Kafka Streams API applications. Use a decorator pattern or interceptors to instrument your Kafka Streams API applications for tracing. You can then record traces when messages are produced or retrieved from a topic. Decorator instrumentation For decorator instrumentation, create a modified Kafka Streams instance for tracing. For OpenTelemetry, you need to create a custom TracingKafkaClientSupplier class to provide tracing instrumentation for Kafka Streams. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the Kafka Streams producer and consumer configuration. Prerequisites You have initialized tracing for the client . You enable instrumentation in Kafka Streams applications by adding the tracing JARs as dependencies to your project. To instrument Kafka Streams with OpenTelemetry, you'll need to write a custom TracingKafkaClientSupplier . The custom TracingKafkaClientSupplier can extend Kafka's DefaultKafkaClientSupplier , overriding the producer and consumer creation methods to wrap the instances with the telemetry-related code. 
Example custom TracingKafkaClientSupplier private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } } Procedure Perform these steps for each Kafka Streams API application. To use a decorator pattern, create an instance of the TracingKafkaClientSupplier supplier interface, then provide the supplier interface to KafkaStreams . Example decorator instrumentation KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); To use interceptors, set the interceptor class in the Kafka Streams producer and consumer configuration. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer and consumer configuration using interceptors props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); 18.10. Specifying tracing systems with OpenTelemetry Instead of the default Jaeger system, you can specify other tracing systems that are supported by OpenTelemetry. If you want to use another tracing system with OpenTelemetry, do the following: Add the library of the tracing system to the Kafka classpath. Add the name of the tracing system as an additional exporter environment variable. Additional environment variable when not using Jaeger OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2 1 The name of the tracing system. In this example, Zipkin is specified. 2 The endpoint of the specific selected exporter that listens for spans. In this example, a Zipkin endpoint is specified. Additional resources OpenTelemetry exporter values 18.11. Specifying custom span names for OpenTelemetry A tracing span is a logical unit of work in Jaeger, with an operation name, start time, and duration. Spans have built-in names, but you can specify custom span names in your Kafka client instrumentation where used. Specifying custom span names is optional and only applies when using a decorator pattern in producer and consumer client instrumentation or Kafka Streams instrumentation . Custom span names cannot be specified directly with OpenTelemetry. Instead, you retrieve span names by adding code to your client application to extract additional tags and attributes. Example code to extract attributes //Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? 
> producerRecord) { set(attributes, AttributeKey.stringKey("prod_start"), "prod1"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("prod_end"), "prod2"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("con_start"), "con1"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("con_end"), "con2"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")); System.setProperty("otel.traces.exporter", "jaeger"); System.setProperty("otel.service.name", "myapp1"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build(); | [
"producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"./bin/connect-standalone.sh ./config/connect-standalone.properties connector1.properties [connector2.properties ...]",
"./bin/connect-distributed.sh ./config/connect-distributed.properties",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties",
"producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor",
"consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"./bin/kafka-mirror-maker.sh --producer.config ./config/producer.properties --consumer.config ./config/consumer.properties --num.streams=2",
"<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha-redhat-00001</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>",
"OpenTelemetry ot = GlobalOpenTelemetry.get();",
"GlobalTracer.register(tracer);",
"// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);",
"consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());",
"OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2",
"//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-distributed-tracing-str |
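A note on section 18.10 above: the exporter library for the selected tracing system must also be on the Kafka client classpath before the OTEL_TRACES_EXPORTER setting can take effect. As a minimal sketch, and assuming Maven with the same OpenTelemetry version (1.34.1) used by the other dependencies listed in this chapter (verify the version against your own OpenTelemetry BOM), the Zipkin exporter can be added like this:

<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
    <!-- Version is an assumption; keep it aligned with your other OpenTelemetry artifacts -->
    <version>1.34.1</version>
</dependency>

With the exporter on the classpath, the OTEL_TRACES_EXPORTER=zipkin and OTEL_EXPORTER_ZIPKIN_ENDPOINT variables shown in section 18.10 select and configure it at run time.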
Migration Toolkit for Containers | Migration Toolkit for Containers OpenShift Container Platform 4.10 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | [
"status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i",
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>",
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc sa get-token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe cluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump",
"tar -xvzf must-gather/metrics/prom_data.tar.gz",
"make prometheus-run",
"Started Prometheus on http://localhost:9090",
"make prometheus-cleanup",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/migration_toolkit_for_containers/index |
Chapter 8. Senders and receivers | Chapter 8. Senders and receivers The client uses sender and receiver links to represent channels for delivering messages. Senders and receivers are unidirectional, with a source end for the message origin, and a target end for the message destination. Source and targets often point to queues or topics on a message broker. Sources are also used to represent subscriptions. 8.1. Creating queues and topics on demand Some message servers support on-demand creation of queues and topics. When a sender or receiver is attached, the server uses the sender target address or the receiver source address to create a queue or topic with a name matching the address. The message server typically defaults to creating either a queue (for one-to-one message delivery) or a topic (for one-to-many message delivery). The client can indicate which it prefers by setting the queue or topic capability on the source or target. To select queue or topic semantics, follow these steps: Configure your message server for automatic creation of queues and topics. This is often the default configuration. Set either the queue or topic capability on your sender target or receiver source, as in the examples below. Example: Sending to a queue created on demand class CapabilityOptions(SenderOption): def apply(self, sender): sender.target.capabilities.put_object(symbol("queue")) class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect("amqp://example.com") event.container.create_sender(conn, "jobs", options=CapabilityOptions() ) Example: Receiving from a topic created on demand class CapabilityOptions(ReceiverOption): def apply(self, receiver): receiver.source.capabilities.put_object(symbol("topic")) class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect("amqp://example.com") event.container.create_receiver(conn, "notifications", options=CapabilityOptions() ) For more information, see the following examples: queue-send.py queue-receive.py topic-send.py topic-receive.py 8.2. Creating durable subscriptions A durable subscription is a piece of state on the remote server representing a message receiver. Ordinarily, message receivers are discarded when a client closes. However, because durable subscriptions are persistent, clients can detach from them and then re-attach later. Any messages received while detached are available when the client re-attaches. Durable subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that the subscription can be recovered. To create a durable subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : container = Container(handler) container.container_id = "client-1" Configure the receiver source for durability by setting the durability and expiry_policy properties: class SubscriptionOptions(ReceiverOption): def apply(self, receiver): receiver.source.durability = Terminus.DELIVERIES receiver.source.expiry_policy = Terminus.EXPIRE_NEVER Create a receiver with a stable name, such as sub-1 , and apply the source properties: event.container.create_receiver(conn, "notifications", name="sub-1" , options=SubscriptionOptions() ) To detach from a subscription, use the Receiver.detach() method. To terminate the subscription, use the Receiver.close() method. For more information, see the durable-subscribe.py example . 8.3. 
Creating shared subscriptions A shared subscription is a piece of state on the remote server representing one or more message receivers. Because it is shared, multiple clients can consume from the same stream of messages. The client configures a shared subscription by setting the shared capability on the receiver source. Shared subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that multiple client processes can locate the same subscription. If the global capability is set in addition to shared , the receiver name alone is used to identify the subscription. To create a shared subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : container = Container(handler) container.container_id = "client-1" Configure the receiver source for sharing by setting the shared capability: class SubscriptionOptions(ReceiverOption): def apply(self, receiver): receiver.source.capabilities.put_object(symbol("shared")) Create a receiver with a stable name, such as sub-1 , and apply the source properties: event.container.create_receiver(conn, "notifications", name="sub-1" , options=SubscriptionOptions() ) To detach from a subscription, use the Receiver.detach() method. To terminate the subscription, use the Receiver.close() method. For more information, see the shared-subscribe.py example . | [
"class CapabilityOptions(SenderOption): def apply(self, sender): sender.target.capabilities.put_object(symbol(\"queue\")) class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect(\"amqp://example.com\") event.container.create_sender(conn, \"jobs\", options=CapabilityOptions() )",
"class CapabilityOptions(ReceiverOption): def apply(self, receiver): receiver.source.capabilities.put_object(symbol(\"topic\")) class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect(\"amqp://example.com\") event.container.create_receiver(conn, \"notifications\", options=CapabilityOptions() )",
"container = Container(handler) container.container_id = \"client-1\"",
"class SubscriptionOptions(ReceiverOption): def apply(self, receiver): receiver.source.durability = Terminus.DELIVERIES receiver.source.expiry_policy = Terminus.EXPIRE_NEVER",
"event.container.create_receiver(conn, \"notifications\", name=\"sub-1\" , options=SubscriptionOptions() )",
"container = Container(handler) container.container_id = \"client-1\"",
"class SubscriptionOptions(ReceiverOption): def apply(self, receiver): receiver.source.capabilities.put_object(symbol(\"shared\"))",
"event.container.create_receiver(conn, \"notifications\", name=\"sub-1\" , options=SubscriptionOptions() )"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_python_client/senders_and_receivers |
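Pulling the snippets from section 8.2 together, the following is a minimal runnable sketch of a durable subscriber. It only assembles the fragments shown above; the broker address amqp://example.com and the notifications address are the same placeholder values used in the earlier examples, and the imports reflect the python-qpid-proton package layout (proton, proton.handlers, proton.reactor).

from proton import Terminus
from proton.handlers import MessagingHandler
from proton.reactor import Container, ReceiverOption

class SubscriptionOptions(ReceiverOption):
    def apply(self, receiver):
        # Durable terminus state survives a detach, so messages keep accumulating
        receiver.source.durability = Terminus.DELIVERIES
        receiver.source.expiry_policy = Terminus.EXPIRE_NEVER

class DurableSubscriber(MessagingHandler):
    def on_start(self, event):
        conn = event.container.connect("amqp://example.com")
        # Stable receiver name: container ID plus receiver name form the subscription ID
        self.receiver = event.container.create_receiver(conn, "notifications",
                                                        name="sub-1",
                                                        options=SubscriptionOptions())

    def on_message(self, event):
        print(event.message.body)

handler = DurableSubscriber()
container = Container(handler)
container.container_id = "client-1"  # stable value so the subscription can be recovered
container.run()

Calling self.receiver.detach() leaves the subscription in place on the server for a later re-attach, while self.receiver.close() terminates it.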
Chapter 1. Extensions overview | Chapter 1. Extensions overview Extensions enable cluster administrators to extend capabilities for users on their OpenShift Container Platform cluster. Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.18 includes components for a next-generation iteration of OLM as a Generally Available (GA) feature, known during this phase as OLM v1. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities. 1.1. Highlights Administrators can explore the following highlights: Fully declarative model that supports GitOps workflows OLM v1 simplifies extension management through two key APIs: A new ClusterExtension API streamlines management of installed extensions, which includes Operators via the registry+v1 bundle format, by consolidating user-facing APIs into a single object. This API is provided as clusterextension.olm.operatorframework.io by the new Operator Controller component. Administrators and SREs can use the API to automate processes and define desired states by using GitOps principles. Note Earlier Technology Preview phases of OLM v1 introduced a new Operator API; this API is renamed ClusterExtension in OpenShift Container Platform 4.16 to address the following improvements: More accurately reflects the simplified functionality of extending a cluster's capabilities Better represents a more flexible packaging format Cluster prefix clearly indicates that ClusterExtension objects are cluster-scoped, a change from OLM (Classic) where Operators could be either namespace-scoped or cluster-scoped The Catalog API, provided by the new catalogd component, serves as the foundation for OLM v1, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Kubernetes extensions and Operators. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges. For more information, see Operator Controller and Catalogd. Improved control over extension updates With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of extension updates. For more information, see Updating a cluster extension. Flexible extension packaging format Administrators can use file-based catalogs to install and manage extensions, such as OLM-based Operators, similar to the OLM (Classic) experience. In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing extensions. Secure catalog communication OLM v1 uses HTTPS encryption for catalogd server responses. 1.2. Purpose The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster. The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators.
Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, delivered as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster. After running in production clusters for many releases, the next generation of OLM aims to encompass lifecycles for cluster extensions that are not just Operators. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extensions/extensions-overview
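Because the ClusterExtension API described above is fully declarative, an installed extension can be kept in Git and applied like any other manifest. The following is an illustrative sketch only: the field layout follows the olm.operatorframework.io/v1 API provided by Operator Controller, every value in angle brackets is a placeholder, and field names should be checked against the ClusterExtension reference for your exact OpenShift Container Platform release.

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <extension_name>
spec:
  namespace: <install_namespace>        # namespace the extension content is installed into
  serviceAccount:
    name: <installer_service_account>   # service account used to install and manage the bundle
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>       # package as published in a catalog served by catalogd
      channels:
        - <channel_name>                # optional: restrict updates to a channel
      version: "<version_or_range>"     # optional: pin or constrain the target version

The package itself is discovered through catalogs that catalogd unpacks and serves on the cluster.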
Index | Index Symbols /etc/multipath.conf package, Setting Up DM Multipath A active/active configuration definition, Overview of DM Multipath illustration, Overview of DM Multipath active/passive configuration definition, Overview of DM Multipath illustration, Overview of DM Multipath alias parameter , Multipaths Device Configuration Attributes configuration file, Multipath Device Identifiers alias_prefix parameter, Configuration File Devices all_devs parameter, Configuration File Devices all_tg_pt parameter, Configuration File Defaults , Configuration File Devices B blacklist configuration file, Configuration File Blacklist default devices, Blacklisting By Device Name device name, Blacklisting By Device Name device protocol, Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later) device type, Blacklisting By Device Type udev property, Blacklisting By udev Property (Red Hat Enterprise Linux 7.5 and Later) WWID, Blacklisting by WWID blacklist_exceptions section multipath.conf file, Blacklist Exceptions C checker_timeout parameter, Configuration File Defaults configuration file alias parameter, Multipaths Device Configuration Attributes alias_prefix parameter, Configuration File Devices all_devs parameter, Configuration File Devices all_tg_pt parameter, Configuration File Defaults , Configuration File Devices blacklist, Configuration File Blacklist checker_timeout parameter, Configuration File Defaults config_dir parameter, Configuration File Defaults deferred_remove parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_wait_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_watch_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices detect_path_checker parameter, Configuration File Defaults , Configuration File Devices detect_prio parameter, Configuration File Defaults , Multipaths Device Configuration Attributes dev_loss_tmo parameter, Configuration File Defaults , Configuration File Devices disable_changed_wwids parameter, Configuration File Defaults failback parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices fast_io_fail_tmo parameter, Configuration File Defaults , Configuration File Devices features parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices flush_on_last_del parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices force_sync parameter, Configuration File Defaults hardware_handler parameter, Configuration File Devices hw_string_match parameter, Configuration File Defaults ignore_new_boot_devs parameter, Configuration File Defaults log_checker_err parameter, Configuration File Defaults max_fds parameter, Configuration File Defaults max_sectors_kb parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices new_bindings_in_boot parameter, Configuration File Defaults no_path_retry parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices overview, Configuration File Overview path_checker parameter, Configuration File Defaults , Configuration File Devices path_grouping_policy parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File 
Devices path_selector parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices polling-interval parameter, Configuration File Defaults prio parameter, Configuration File Defaults , Configuration File Devices prkeys_file parameter, Configuration File Defaults , Multipaths Device Configuration Attributes product parameter, Configuration File Devices product_blacklist parameter, Configuration File Devices queue_without_daemon parameter, Configuration File Defaults reassign_maps parameter, Configuration File Defaults remove_retries parameter, Configuration File Defaults retain_attached_hw_handler parameter, Configuration File Defaults , Multipaths Device Configuration Attributes retrigger_delay parameter, Configuration File Defaults retrigger_tries parameter, Configuration File Defaults revision parameter, Configuration File Devices rr_min_io parameter, Configuration File Defaults , Multipaths Device Configuration Attributes rr_weight parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices skip_kpartx parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices uid_attribute parameter, Configuration File Defaults , Configuration File Devices user_friendly_names parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices vendor parameter, Configuration File Devices verbosity parameter, Configuration File Defaults wwid parameter, Multipaths Device Configuration Attributes configuring DM Multipath, Setting Up DM Multipath config_dir parameter, Configuration File Defaults D defaults section multipath.conf file, Configuration File Defaults deferred_remove parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_wait_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_watch_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices detect_path_checker parameter, Configuration File Defaults , Configuration File Devices detect_prio parameter, Configuration File Defaults , Multipaths Device Configuration Attributes dev/mapper directory, Multipath Device Identifiers device name, Multipath Device Identifiers device-mapper-multipath package, Setting Up DM Multipath devices adding, Configuring Storage Devices , Configuration File Devices devices section multipath.conf file, Configuration File Devices dev_loss_tmo parameter, Configuration File Defaults , Configuration File Devices disable_changed_wwids parameter, Configuration File Defaults DM Multipath and LVM, Multipath Devices in Logical Volumes components, DM Multipath Components configuration file, The DM Multipath Configuration File configuring, Setting Up DM Multipath definition, Device Mapper Multipathing device name, Multipath Device Identifiers devices, Multipath Devices failover, Overview of DM Multipath overview, Overview of DM Multipath redundancy, Overview of DM Multipath setup, Setting Up DM Multipath setup, overview, DM Multipath Setup Overview dm-n devices, Multipath Device Identifiers dmsetup command, determining device mapper entries, Determining Device Mapper Entries with the dmsetup Command dm_multipath kernel module , DM Multipath Components F failback parameter, Configuration File Defaults , Multipaths Device Configuration 
Attributes , Configuration File Devices failover, Overview of DM Multipath fast_io_fail_tmo parameter, Configuration File Defaults , Configuration File Devices features parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices features, new and changed, New and Changed Features flush_on_last_del parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices force_sync parameter, Configuration File Defaults H hardware_handler parameter, Configuration File Devices hw_string_match parameter, Configuration File Defaults I ignore_new_boot_devs parameter, Configuration File Defaults initramfs starting multipath, Setting Up Multipathing in the initramfs File System K kpartx command , DM Multipath Components L local disks, ignoring, Ignoring Local Disks when Generating Multipath Devices log_checker_err parameter, Configuration File Defaults LVM physical volumes multipath devices, Multipath Devices in Logical Volumes lvm.conf file , Multipath Devices in Logical Volumes M max_fds parameter, Configuration File Defaults max_sectors_kb parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices mpathconf command , DM Multipath Components multipath command , DM Multipath Components options, Multipath Command Options output, Multipath Command Output queries, Multipath Queries with multipath Command multipath daemon (multipathd), The Multipath Daemon multipath devices, Multipath Devices logical volumes, Multipath Devices in Logical Volumes LVM physical volumes, Multipath Devices in Logical Volumes Multipath Helper, Automatic Configuration File Generation with Multipath Helper multipath.conf file, Storage Array Support , The DM Multipath Configuration File blacklist_exceptions section, Blacklist Exceptions defaults section, Configuration File Defaults devices section, Configuration File Devices multipaths section, Multipaths Device Configuration Attributes multipathd command, Troubleshooting with the multipathd Interactive Console interactive console, Troubleshooting with the multipathd Interactive Console multipathd daemon , DM Multipath Components multipathd start command, Setting Up DM Multipath multipathed root file system, Moving root File Systems from a Single Path Device to a Multipath Device multipathed swap file system, Moving swap File Systems from a Single Path Device to a Multipath Device multipaths section multipath.conf file, Multipaths Device Configuration Attributes N new_bindings_in_boot parameter, Configuration File Defaults no_path_retry parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices O overview features, new and changed, New and Changed Features P path_checker parameter, Configuration File Defaults , Configuration File Devices path_grouping_policy parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices path_selector parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices polling_interval parameter, Configuration File Defaults prio parameter, Configuration File Defaults , Configuration File Devices prkeys_file parameter, Configuration File Defaults , Multipaths Device Configuration Attributes product parameter, Configuration File Devices product_blacklist parameter, Configuration File Devices Q queue_without_daemon parameter, Configuration File Defaults R 
reassign_maps parameter, Configuration File Defaults remove_retries parameter, Configuration File Defaults resizing a multipath device, Resizing an Online Multipath Device retain_attached_hw_handler parameter, Configuration File Defaults , Multipaths Device Configuration Attributes retrigger_delay parameter, Configuration File Defaults retrigger_tries parameter, Configuration File Defaults revision parameter, Configuration File Devices root file system, Moving root File Systems from a Single Path Device to a Multipath Device rr_min_io parameter, Configuration File Defaults , Multipaths Device Configuration Attributes rr_weight parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices S setup DM Multipath, Setting Up DM Multipath skip_kpartxr parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices storage array support, Storage Array Support storage arrays adding, Configuring Storage Devices , Configuration File Devices swap file system, Moving swap File Systems from a Single Path Device to a Multipath Device U uid_attribute parameter, Configuration File Defaults , Configuration File Devices user_friendly_names parameter , Multipath Device Identifiers , Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices V vendor parameter, Configuration File Devices verbosity parameter, Configuration File Defaults W World Wide Identifier (WWID), Multipath Device Identifiers wwid parameter, Multipaths Device Configuration Attributes | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/ix01 |
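Many of the attributes indexed above come together in /etc/multipath.conf; the short sketch below is only an illustration, and the WWID, alias, and blacklisted device are invented placeholders rather than values from this guide.

defaults {
    user_friendly_names yes
    polling_interval    10
}
blacklist {
    devnode "^sda$"                              # hypothetical local disk to exclude
}
multipaths {
    multipath {
        wwid  3600508b4000156d70001200000b0000   # placeholder WWID
        alias yellow                             # friendly name for this device
        path_grouping_policy multibus
        no_path_retry 5
    }
}

After editing the file, multipathd typically needs to re-read its configuration (for example, with the multipathd reconfigure command) before the changes take effect.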
Chapter 4. Debug Parameters | Chapter 4. Debug Parameters These parameters allow you to set debug mode on a per-service basis. The Debug parameter acts as a global parameter for all services and the per-service parameters can override the effects of global parameter on individual services. Parameter Description BarbicanDebug Set to True to enable debugging OpenStack Key Manager (barbican) service. The default value is false . CinderDebug Set to True to enable debugging on OpenStack Block Storage (cinder) services. The default value is false . ConfigDebug Whether to run configuration management (e.g. Puppet) in debug mode. The default value is false . Debug Set to True to enable debugging on all services. The default value is false . DesignateDebug Set to True to enable debugging Designate services. The default value is false . GlanceDebug Set to True to enable debugging OpenStack Image Storage (glance) service. The default value is false . HeatDebug Set to True to enable debugging OpenStack Orchestration (heat) services. The default value is false . HorizonDebug Set to True to enable debugging OpenStack Dashboard (horizon) service. The default value is false . IronicDebug Set to True to enable debugging OpenStack Bare Metal (ironic) services. The default value is false . KeystoneDebug Set to True to enable debugging OpenStack Identity (keystone) service. The default value is false . ManilaDebug Set to True to enable debugging OpenStack Shared File Systems (manila) services. The default value is false . MemcachedDebug Set to True to enable debugging Memcached service. The default value is false . NeutronDebug Set to True to enable debugging OpenStack Networking (neutron) services. The default value is false . NovaDebug Set to True to enable debugging OpenStack Compute (nova) services. The default value is false . OctaviaDebug Set to True to enable debugging OpenStack Load Balancing-as-a-Service (octavia) services. The default value is false . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/ref_debug-parameters_overcloud_parameters |
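These parameters are normally supplied through a heat environment file passed to the deployment command; the sketch below is a hedged example, the file name is an assumption, and only parameters from the table above are used.

# debug.yaml -- example environment file (name is an assumption)
parameter_defaults:
  Debug: false          # leave global debugging off
  NeutronDebug: true    # enable debugging only for OpenStack Networking (neutron)
  NovaDebug: true       # and for OpenStack Compute (nova)

Because the per-service values override the global Debug parameter, this keeps log volume down while still capturing detailed output from the two services under investigation; the file would typically be included with an additional -e argument to the overcloud deploy command.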
Chapter 4. Node Feature Discovery Operator | Chapter 4. Node Feature Discovery Operator Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration. 4.1. About the Node Feature Discovery Operator The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on. The NFD Operator can be found on the Operator Hub by searching for "Node Feature Discovery". 4.2. Installing the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console. 4.2.1. Installing the NFD Operator using the CLI As a cluster administrator, you can install the NFD Operator using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NFD Operator. Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-nfd Create the namespace by running the following command: USD oc create -f nfd-namespace.yaml Install the NFD Operator in the namespace you created in the step by creating the following objects: Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd Create the OperatorGroup CR by running the following command: USD oc create -f nfd-operatorgroup.yaml Create the following Subscription CR and save the YAML in the nfd-sub.yaml file: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: "stable" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f nfd-sub.yaml Change to the openshift-nfd project: USD oc project openshift-nfd Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m A successful deployment shows a Running status. 4.2.2. Installing the NFD Operator using the web console As a cluster administrator, you can install the NFD Operator using the web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Node Feature Discovery from the list of available Operators, and then click Install . On the Install Operator page, select A specific namespace on the cluster , and then click Install . You do not need to create a namespace because it is created for you. Verification To verify that the NFD Operator installed successfully: Navigate to the Operators Installed Operators page. 
Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting If the Operator does not appear as installed, troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-nfd project. 4.3. Using the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery CR. Based on the NodeFeatureDiscovery CR, the Operator will create the operand (NFD) components in the desired namespace. You can edit the CR to choose another namespace , image , imagePullPolicy , and nfd-worker-conf , among other options. As a cluster administrator, you can create a NodeFeatureDiscovery instance using the OpenShift Container Platform CLI or the web console. 4.3.1. Create a NodeFeatureDiscovery instance using the CLI As a cluster administrator, you can create a NodeFeatureDiscovery CR instance using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NFD Operator. Procedure Create the following NodeFeatureDiscovery Custom Resource (CR), and then save the YAML in the NodeFeatureDiscovery.yaml file: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: "" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.10 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] For more details on how to customize NFD workers, refer to the Configuration file reference of nfd-worker . 
Create the NodeFeatureDiscovery CR instance by running the following command: USD oc create -f NodeFeatureDiscovery.yaml Verification To verify that the instance is created, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s A successful deployment shows a Running status. 4.3.2. Create a NodeFeatureDiscovery CR using the web console Procedure Navigate to the Operators Installed Operators page. Find Node Feature Discovery and see a box under Provided APIs . Click Create instance . Edit the values of the NodeFeatureDiscovery CR. Click Create . 4.4. Configuring the Node Feature Discovery Operator 4.4.1. core The core section contains common configuration settings that are not specific to any particular feature source. core.sleepInterval core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done. This value is overridden by the deprecated --sleep-interval command line flag, if specified. Example usage core: sleepInterval: 60s 1 The default value is 60s . core.sources core.sources specifies the list of enabled feature sources. A special value all enables all feature sources. This value is overridden by the deprecated --sources command line flag, if specified. Default: [all] Example usage core: sources: - system - custom core.labelWhiteList core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published. The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted. This value is overridden by the deprecated --label-whitelist command line flag, if specified. Default: null Example usage core: labelWhiteList: '^cpu-cpuid' core.noPublish Setting core.noPublish to true disables all communication with the nfd-master . It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master . This value is overridden by the --no-publish command line flag, if specified. Example: Example usage core: noPublish: true 1 The default value is false . core.klog The following options specify the logger configuration, most of which can be dynamically adjusted at run-time. The logger options can also be specified using command line flags, which take precedence over any corresponding config file options. core.klog.addDirHeader If set to true , core.klog.addDirHeader adds the file directory to the header of the log messages. Default: false Run-time configurable: yes core.klog.alsologtostderr Log to standard error as well as files. Default: false Run-time configurable: yes core.klog.logBacktraceAt When logging hits line file:N, emit a stack trace. Default: empty Run-time configurable: yes core.klog.logDir If non-empty, write log files in this directory. Default: empty Run-time configurable: no core.klog.logFile If not empty, use this log file. Default: empty Run-time configurable: no core.klog.logFileMaxSize core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0 , the maximum file size is unlimited. 
Default: 1800 Run-time configurable: no core.klog.logtostderr Log to standard error instead of files Default: true Run-time configurable: yes core.klog.skipHeaders If core.klog.skipHeaders is set to true , avoid header prefixes in the log messages. Default: false Run-time configurable: yes core.klog.skipLogHeaders If core.klog.skipLogHeaders is set to true , avoid headers when opening log files. Default: false Run-time configurable: no core.klog.stderrthreshold Logs at or above this threshold go to stderr. Default: 2 Run-time configurable: yes core.klog.v core.klog.v is the number for the log level verbosity. Default: 0 Run-time configurable: yes core.klog.vmodule core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging. Default: empty Run-time configurable: yes 4.4.2. sources The sources section contains feature source specific configuration parameters. sources.cpu.cpuid.attributeBlacklist Prevent publishing cpuid features listed in this option. This value is overridden by sources.cpu.cpuid.attributeWhitelist , if specified. Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3] Example usage sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT] sources.cpu.cpuid.attributeWhitelist Only publish the cpuid features listed in this option. sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist . Default: empty Example usage sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL] sources.kernel.kconfigFile sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations. Default: empty Example usage sources: kernel: kconfigFile: "/path/to/kconfig" sources.kernel.configOpts sources.kernel.configOpts represents kernel configuration options to publish as feature labels. Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT] Example usage sources: kernel: configOpts: [NO_HZ, X86, DMI] sources.pci.deviceClassWhitelist sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03 ) or full class-subclass combination (for example 0300 ). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields . Default: ["03", "0b40", "12"] Example usage sources: pci: deviceClassWhitelist: ["0200", "03"] sources.pci.deviceLabelFields sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class , vendor , device , subsystem_vendor and subsystem_device . Default: [class, vendor] Example usage sources: pci: deviceLabelFields: [class, vendor, device] With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true sources.usb.deviceClassWhitelist sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields . Default: ["0e", "ef", "fe", "ff"] Example usage sources: usb: deviceClassWhitelist: ["ef", "ff"] sources.usb.deviceLabelFields sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class , vendor , and device . 
Default: [class, vendor, device] Example usage sources: pci: deviceLabelFields: [class, vendor] With the example config above, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true . sources.custom sources.custom is the list of rules to process in the custom feature source to create user-specific labels. Default: empty Example usage source: custom: - name: "my.custom.feature" matchOn: - loadedKMod: ["e1000e"] - pciId: class: ["0200"] vendor: ["8086"] 4.5. Using the NFD Topology Updater The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pod on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to all of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster. To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator . 4.5.1. NodeResourceTopology CR When run with NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as: apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: ["SingleNUMANodeContainerLevel"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 4.5.2. NFD Topology Updater command line flags To view available command line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command: USD podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help -ca-file The -ca-file flag is one of the three flags, together with the -cert-file and `-key-file`flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master. Default: empty Important The -ca-file flag must be specified together with the -cert-file and -key-file flags. Example USD nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -cert-file The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags , that controls mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests. Default: empty Important The -cert-file flag must be specified together with the -ca-file and -key-file flags. Example USD nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt -h, -help Print usage and exit. -key-file The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. 
This flag specifies the private key corresponding the given certificate file, or -cert-file , that is used for authenticating outgoing requests. Default: empty Important The -key-file flag must be specified together with the -ca-file and -cert-file flags. Example USD nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt -kubelet-config-file The -kubelet-config-file specifies the path to the Kubelet's configuration file. Default: /host-var/lib/kubelet/config.yaml Example USD nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml -no-publish The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master. Default: false Example USD nfd-topology-updater -no-publish 4.5.2.1. -oneshot The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection. Default: false Example USD nfd-topology-updater -oneshot -no-publish -podresources-socket The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them. Default: /host-var/liblib/kubelet/pod-resources/kubelet.sock Example USD nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock -server The -server flag specifies the address of the nfd-master endpoint to connect to. Default: localhost:8080 Example USD nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443 -server-name-override The -server-name-override flag specifies the common name (CN) which to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes. Default: empty Example USD nfd-topology-updater -server-name-override=localhost -sleep-interval The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies infinite sleep interval and no re-detection is done. Default: 60s Example USD nfd-topology-updater -sleep-interval=1h -version Print version and exit. -watch-namespace The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting. This is particularly useful for testing and debugging purposes. A * value means that all of the pods across all namespaces are considered during the accounting process. Default: * Example USD nfd-topology-updater -watch-namespace=rte | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.10 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc create -f NodeFeatureDiscovery.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/specialized_hardware_and_driver_enablement/node-feature-discovery-operator |
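To illustrate how the labels published by NFD are consumed by workloads, the hedged sketch below pins a pod to nodes that expose a particular CPU feature label; the exact label key depends on what nfd-worker detects on your nodes, and the pod and image names are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: avx512-workload                     # illustrative name
spec:
  nodeSelector:
    # Label of the form published by nfd-worker for CPUID features;
    # confirm the key on your cluster with: oc get node <node> --show-labels
    feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
    command: ["sleep", "infinity"]

The scheduler then places the pod only on nodes carrying that label, which is how the hardware discovery performed by the NFD Operator translates into workload placement.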
Chapter 9. Liquids: Developer Portal | Chapter 9. Liquids: Developer Portal This section contains information about Liquid formatting tags and how they work in the 3scale system, including the different elements of the markup, the connections between them, and short examples of how to use them in your Developer Portal. To learn the basics about Liquids, see the Liquid reference . 9.1. Using Liquids in the Developer Portal This section explains how to enable liquid markup processing in layouts and pages. 9.1.1. Enabling Liquids Liquid markup processing is enabled by default for all partials and email templates. Enabling them on layouts is done by ticking the checkbox right under the system_name input field. However, to enable them on pages, you will have to go to the advanced options section of the page. Just expand the Advanced options section and mark the Liquid enabled checkbox. From now on, all the liquid markup will be processed by the internal engine, and the Developer Portal built-in editor will also add code highlighting for liquid. 9.1.2. Different use on pages, partials, and layouts The use of liquids usually differs slightly between pages, partials and layouts. Within pages, liquids are single-use elements; while liquids with partials and layouts are the reusable elements of the Developer Portal. This means that instead of applying multiple layouts or partials with small changes to different pages, you can add some logic liquid tags, and alter the layout depending on the page the user is on. <!-- if we are inside '/documentation' URL --> <li class="{% if request.request_uri contains "/documentation" %}active{% endif %}"><!-- add the active class to the menu item --> <a href="/documentation">Documentation</a> </li> 9.1.3. Use with CSS/JS Liquid markup does not just work with HTML, you can easily combine it with CSS and/or JavaScript code for even more control. To enable liquid in a stylesheet or JS, create them as a page and follow the same steps as if you were enabling it for a normal page. Having done that, you'll be able to add some conditional markup in CSS or use the server-side data in JavaScript. Just remember to set the content type of the page as CSS or JS. 9.2. Usage of liquids in email templates This section explains how you can use liquid tags to customize email templates. 9.2.1. Differences from Developer Portal As previously mentioned, liquid tags can also be used to customize the email templates sent to your users. All the general rules for writing liquid mentioned before also apply to the email templates, with some exceptions: There is no commonly shared list of variables that are available on every template. Instead, you will have to do some testing using the previously mentioned {% debug:help %} tag. Since emails are by nature different from web pages, you will have limited or no access to some tags. For example, {{ request.request_uri }} will not make sense, as an email does not have a URL. 9.3. Troubleshooting This troubleshooting section will help you debug and fix typical errors that might occur. 9.3.1. Debugging If something is not working as intended, but is saved correctly, check the following: All the tags are closed correctly. You are referring to variables available on the current page. You are not trying to access an array - for example current_account.applications is an array of applications. The logic is correct. 9.3.2. 
Typical errors and ways to solve them If the document cannot be saved due to a liquid error, it is usually because some tags or drops were not closed correctly. Check that all your {% %} and {{ }} tags are properly closed and that logic expressions such as if and for are terminated correctly with endif and endfor . When this is the problem, an error with a descriptive message is normally displayed at the top of the page above the editor. If everything saved correctly but you do not see any effect, check that you are not referring to an empty element and that you are not using a logic tag to display content. {% %} will never render any content, apart from its use in tags that are an alias of a more complex set of tags and drops. If only a # symbol is displayed, it means that you have tried to display an element that is an array. Check the section on the liquid hierarchy . 9.3.3. Contacting support If you still have a problem, you can open a new case via the Red Hat Customer Portal . | [
"<!-- if we are inside '/documentation' URL --> <li class=\"{% if request.request_uri contains \"/documentation\" %}active{% endif %}\"><!-- add the active class to the menu item --> <a href=\"/documentation\">Documentation</a> </li>"
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/liquids |
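As a small illustration of the debugging advice above, the fragment below combines the {% debug:help %} tag with a guarded loop over current_account.applications; the application name property is assumed to be available on the application drop, so verify it with the debug output on your own portal before relying on it.

{% debug:help %}  <!-- lists the liquid drops available on this page or template -->
{% if current_account %}
  <ul>
    {% for application in current_account.applications %}
      <li>{{ application.name }}</li>
    {% endfor %}
  </ul>
{% endif %}

Because the loop is wrapped in an if block, the markup renders nothing for anonymous visitors instead of raising an error, and every logic tag is closed with a matching endfor or endif.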
Chapter 14. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure | Chapter 14. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 14.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 14.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. 
Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 14.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 14.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 14.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 14.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 14.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 14.4.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 14.4.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 14.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 14.4.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 14.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 14.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 14.5. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 14.5.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. 
Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. 
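For the restricted-network options described earlier in this subsection, the VPC endpoints can be created with the AWS CLI before the cluster stacks are launched; the IDs below are placeholders, the region must match your cluster, and the CLI service-name format (com.amazonaws.<region>.<service>) differs from the endpoint DNS names listed above, so treat this as a hedged sketch rather than a required procedure.

# Gateway endpoint for S3, associated with the route tables of the cluster subnets
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0

# Interface endpoint for EC2 (repeat with elasticloadbalancing for the ELB API)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ec2 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0

If you use a proxy instead, the same endpoint names belong in the noProxy field of the install-config.yaml proxy configuration, as noted above.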
Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. 
The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 14.5.2. Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 14.5.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 14.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 14.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 14.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 14.6. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 14.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 14.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 14.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 14.10. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 14.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 14.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 14.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 14.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 14.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 14.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole 14.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 14.7. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 14.7.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact.
With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 14.7.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- Add the image content resources: imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 14.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 14.7.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. 
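If you remove the privateZone and publicZone sections, the wildcard Ingress record that you add manually later will look roughly like the following sketch. This command is not part of the generated manifests and can only be run after the Ingress Controller's router load balancer exists; the public zone ID, cluster name, and router load balancer values are hypothetical placeholders:
USD aws route53 change-resource-record-sets --hosted-zone-id <public_zone_id> --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"*.apps.mycluster.example.com.","Type":"A","AliasTarget":{"HostedZoneId":"<router_elb_hosted_zone_id>","DNSName":"<router_elb_dns_name>","EvaluateTargetHealth":false}}}]}'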
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating long-term credentials 14.8. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 14.9. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 14.9.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 14.17. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 14.10. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. 
You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 14.10.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 14.18. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 14.11. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
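The parameter file in the following procedure reuses two outputs from the VPC stack that you created earlier. If you prefer not to copy them from the AWS console, the following sketch reads them with the AWS CLI. The stack name cluster-vpc is an assumption; substitute the name that you gave the VPC stack:

$ VPC_STACK=cluster-vpc    # assumed stack name; replace with the name of your VPC stack
$ aws cloudformation describe-stacks --stack-name ${VPC_STACK} \
    --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text
$ aws cloudformation describe-stacks --stack-name ${VPC_STACK} \
    --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetIds`].OutputValue' --output text

The first query prints the VpcId value and the second prints the PrivateSubnetIds value that the parameter file in the procedure expects.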
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 14.11.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 14.19. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
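# The next three ingress rules admit the same IPsec traffic (IKE on 500/udp, NAT-T on 4500/udp,
# and ESP, IP protocol 50) into the control plane security group, this time from the worker
# security group rather than from other control plane nodes.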
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
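# The worker IAM role that follows is intentionally narrow: worker nodes only receive the
# ec2:DescribeInstances and ec2:DescribeRegions permissions granted by the attached policy.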
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile 14.12. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 14.13. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 14.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-01860370941726bdd ap-east-1 ami-05bc702cdaf7e4251 ap-northeast-1 ami-098932fd93c15690d ap-northeast-2 ami-006f4e02d97910a36 ap-northeast-3 ami-0c4bd5b1724f82273 ap-south-1 ami-0cbf22b638724853d ap-south-2 ami-031f4d165f4b429c4 ap-southeast-1 ami-0dc3e381a731ab9fc ap-southeast-2 ami-032ae8d0f287a66a6 ap-southeast-3 ami-0393130e034b86423 ap-southeast-4 ami-0b38f776bded7d7d7 ca-central-1 ami-058ea81b3a1d17edd eu-central-1 ami-011010debd974a250 eu-central-2 ami-0623b105ae811a5e2 eu-north-1 ami-0c4bb9ce04f3526d4 eu-south-1 ami-06c29eccd3d74df52 eu-south-2 ami-00e0b5f3181a3f98b eu-west-1 ami-087bfa513dc600676 eu-west-2 ami-0ebad59c0e9554473 eu-west-3 ami-074e63b65eaf83f96 me-central-1 ami-0179d6ae1d908ace9 me-south-1 ami-0b60c75273d3efcd7 sa-east-1 ami-0913cbfbfa9a7a53c us-east-1 ami-0f71dcd99e6a1cd53 us-east-2 ami-0545fae7edbbbf061 us-gov-east-1 ami-081eabdc478e501e5 us-gov-west-1 ami-076102c394767f319 us-west-1 ami-0609e4436c4ae5eff us-west-2 ami-0c5d3e03c0ab9b19a Table 14.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-08dd66a61a2caa326 ap-east-1 ami-0232cd715f8168c34 ap-northeast-1 ami-0bc0b17618da96700 ap-northeast-2 ami-0ee505fb62eed2fd6 ap-northeast-3 ami-0462cd2c3b7044c77 ap-south-1 ami-0e0b4d951b43adc58 ap-south-2 ami-06d457b151cc0e407 ap-southeast-1 ami-0874e1640dfc15f17 ap-southeast-2 ami-05f215734ceb0f5ad ap-southeast-3 ami-073030df265c88b3f ap-southeast-4 ami-043f4c40a6fc3238a ca-central-1 ami-0840622f99a32f586 eu-central-1 ami-09a5e6ebe667ae6b5 eu-central-2 ami-0835cb1bf387e609a eu-north-1 ami-069ddbda521a10a27 eu-south-1 ami-09c5cc21026032b4c eu-south-2 ami-0c36ab2a8bbeed045 eu-west-1 ami-0d2723c8228cb2df3 eu-west-2 ami-0abd66103d069f9a8 eu-west-3 ami-08c7249d59239fc5c me-central-1 ami-0685f33ebb18445a2 me-south-1 ami-0466941f4e5c56fe6 sa-east-1 ami-08cdc0c8a972f4763 us-east-1 ami-0d461970173c4332d us-east-2 ami-0e9cdc0e85e0a6aeb us-gov-east-1 ami-0b896df727672ce09 us-gov-west-1 ami-0b896df727672ce09 us-west-1 ami-027b7cc5f4c74e6c1 us-west-2 ami-0b189d89b44bdfbf2 14.14. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. 
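Two values that the following procedure relies on can be read directly from the installation assets. This is a sketch only: the jq command assumes the default metadata.json that openshift-install writes alongside the Ignition config files, and aws s3 presign is one way to produce the presigned URL mentioned in the procedure for cases where the s3:// schema cannot be used:

$ jq -r .infraID <installation_directory>/metadata.json    # prints the infrastructure name, <cluster-name>-<random-string>
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600    # optional: presigned URL for the uploaded bootstrap Ignition file

The presign command only produces a usable URL after you upload bootstrap.ign to the bucket in the steps below.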
Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 
8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. 
You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 14.14.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 14.20. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 14.15. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": 
"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 14.15.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 14.21. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
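# Registers the private IP address of Master0 with the internal service target group when
# AutoRegisterELB is "yes"; Master1 and Master2 below repeat the same three target registrations.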
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] 14.16. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. 
You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for the VPC. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
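Most of the parameter values above can be pulled from artifacts you have already generated. The following is a minimal sketch that assembles the parameters file with jq; the file name worker-params.json, the m6i.large instance type, and every <...> placeholder are illustrative assumptions that you must replace with the outputs of your own VPC and security group stacks. The jq path used for the certificate authority matches the Ignition 3.x pointer config layout shown in the UserData of the templates above.
#!/usr/bin/env bash
# Sketch: build the worker CloudFormation parameters file from existing artifacts.
# worker-params.json and the <...> placeholders are assumptions, not fixed names.
set -euo pipefail

INSTALL_DIR=<installation_directory>   # directory that holds worker.ign and metadata.json
INFRA_ID=$(jq -r .infraID "${INSTALL_DIR}/metadata.json")
CA_BUNDLE=$(jq -r '.ignition.security.tls.certificateAuthorities[0].source' "${INSTALL_DIR}/worker.ign")

jq -n \
  --arg infra "${INFRA_ID}" \
  --arg ami "ami-<random_string>" \
  --arg subnet "subnet-<random_string>" \
  --arg sg "sg-<random_string>" \
  --arg ign "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" \
  --arg ca "${CA_BUNDLE}" \
  --arg profile "<worker_instance_profile_name>" \
  --arg itype "m6i.large" \
  '[
    {ParameterKey: "InfrastructureName",        ParameterValue: $infra},
    {ParameterKey: "RhcosAmi",                  ParameterValue: $ami},
    {ParameterKey: "Subnet",                    ParameterValue: $subnet},
    {ParameterKey: "WorkerSecurityGroupId",     ParameterValue: $sg},
    {ParameterKey: "IgnitionLocation",          ParameterValue: $ign},
    {ParameterKey: "CertificateAuthorities",    ParameterValue: $ca},
    {ParameterKey: "WorkerInstanceProfileName", ParameterValue: $profile},
    {ParameterKey: "WorkerInstanceType",        ParameterValue: $itype}
  ]' > worker-params.json
You can then pass worker-params.json as the parameters file in the create-stack command shown in the next step.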
Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 14.16.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 14.22. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp 14.17. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. 
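If the bootstrap wait times out or exits with a FATAL message, collecting the bootstrap logs before tearing anything down makes troubleshooting much easier. The following is a minimal sketch that reruns the wait with debug logging and gathers diagnostic data on failure; the machine IP addresses and SSH key path are assumptions, and it presumes the host running the installer can reach the bootstrap machine over SSH.
#!/usr/bin/env bash
# Sketch: rerun the bootstrap wait with debug logging and, on failure, gather
# diagnostic data from the bootstrap and control plane machines.
# The IP addresses and key path below are placeholders (assumptions).
set -uo pipefail

INSTALL_DIR=<installation_directory>

if ! ./openshift-install wait-for bootstrap-complete --dir "${INSTALL_DIR}" --log-level=debug; then
  ./openshift-install gather bootstrap \
    --dir "${INSTALL_DIR}" \
    --key ~/.ssh/id_ed25519 \
    --bootstrap <bootstrap_ip> \
    --master <master0_ip> --master <master1_ip> --master <master2_ip>
  # The gather command prints the path of the log bundle it creates.
  echo "Bootstrap did not complete; inspect the gathered log bundle before retrying." >&2
  exit 1
fi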
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. 14.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 14.20. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
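Before stepping through the procedure, it can help to block until every cluster Operator reports Available instead of re-reading the watch output by hand. A minimal sketch follows, assuming the kubeconfig exported earlier is still in effect and that a 30-minute ceiling is acceptable for your environment.
# Wait for all cluster Operators to report Available=True (the timeout is an assumption).
oc wait clusteroperators --all --for=condition=Available=True --timeout=30m

# Optionally confirm that nothing is still Progressing or Degraded.
oc wait clusteroperators --all --for=condition=Progressing=False --timeout=30m
oc wait clusteroperators --all --for=condition=Degraded=False --timeout=30m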
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 14.20.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 14.20.2. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 14.20.2.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. 
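The procedure below references a bucket lifecycle policy that aborts incomplete multipart uploads after one day and recommends blocking public access. The following AWS CLI sketch shows one way to set that up in advance; the bucket name and region are placeholders, not values taken from this document.
#!/usr/bin/env bash
# Sketch: create a private S3 bucket for the image registry with the lifecycle
# policy referenced in the procedure below. Bucket name and region are assumptions.
set -euo pipefail

BUCKET=<cluster_name>-image-registry
REGION=<region_name>

# Omit --create-bucket-configuration when the region is us-east-1.
aws s3api create-bucket --bucket "${BUCKET}" --region "${REGION}" \
  --create-bucket-configuration LocationConstraint="${REGION}"

# Block all public access, as recommended at the end of the procedure.
aws s3api put-public-access-block --bucket "${BUCKET}" \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Abort incomplete multipart uploads after one day.
aws s3api put-bucket-lifecycle-configuration --bucket "${BUCKET}" \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "cleanup-incomplete-multipart-registry-uploads",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
      }
    ]
  }'
After the bucket exists, supply its name and region in the configs.imageregistry.operator.openshift.io/cluster storage stanza shown in the procedure.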
Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 14.20.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 14.21. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 14.22. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. 
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 
4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 14.23. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Register your cluster on the Cluster registration page. 14.24. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 14.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 14.26. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 14.27. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . If necessary, you can remove cloud provider credentials . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-restricted-networks-aws |
Chapter 7. Setting up Data Grid services | Chapter 7. Setting up Data Grid services Use Data Grid Operator to create clusters of Data Grid service pods. 7.1. Service types Services are stateful applications, based on the Data Grid Server image, that provide flexible and robust in-memory data storage. Data Grid Operator supports only the DataGrid service type, which deploys Data Grid clusters with full configuration and capabilities. The Cache service type is no longer supported. The DataGrid service type for clusters lets you: Back up data across global clusters with cross-site replication. Create caches with any valid configuration. Add file-based cache stores to save data in a persistent volume. Query values across caches using the Data Grid Query API. Use advanced Data Grid features and capabilities. 7.2. Creating Data Grid service pods To use custom cache definitions along with Data Grid capabilities such as cross-site replication, create clusters of Data Grid service pods. Procedure Create an Infinispan CR that sets spec.service.type: DataGrid and configures any other Data Grid service resources. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid Important You cannot change the spec.service.type field after you create pods. To change the service type, you must delete the existing pods and create new ones. Apply your Infinispan CR to create the cluster. 7.2.1. Data Grid service CR This topic describes the Infinispan CR for Data Grid service pods. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true' spec: replicas: 6 version: 8.4.6-1 upgrades: type: Shutdown service: type: DataGrid container: storage: 2Gi # The ephemeralStorage and storageClassName fields are mutually exclusive. ephemeralStorage: false storageClassName: my-storage-class sites: local: name: azure expose: type: LoadBalancer locations: - name: azure url: openshift://api.azure.host:6443 secretName: azure-token - name: aws clusterName: infinispan namespace: rhdg-namespace url: openshift://api.aws.host:6443 secretName: aws-token security: endpointSecretName: endpoint-identities endpointEncryption: type: Secret certSecretName: tls-secret container: extraJvmOpts: "-XX:NativeMemoryTracking=summary" cpu: "2000m:1000m" memory: "2Gi:1Gi" logging: categories: org.infinispan: debug org.jgroups: debug org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error expose: type: LoadBalancer configMapName: "my-cluster-config" configListener: enabled: true affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: infinispan infinispan_cr: infinispan topologyKey: "kubernetes.io/hostname" Field Description metadata.name Names your Data Grid cluster. metadata.annotations.infinispan.org/monitoring Automatically creates a ServiceMonitor for your cluster. spec.replicas Specifies the number of pods in your cluster. spec.version Specifies the Data Grid Server version of your cluster. spec.upgrades.type Controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. spec.service.type Configures the type of Data Grid service. A value of DataGrid creates a cluster with Data Grid service pods. spec.service.container Configures the storage resources for Data Grid service pods. spec.service.sites Configures cross-site replication.
spec.security.endpointSecretName Specifies an authentication secret that contains Data Grid user credentials. spec.security.endpointEncryption Specifies TLS certificates and keystores to encrypt client connections. spec.container Specifies JVM, CPU, and memory resources for Data Grid pods. spec.logging Configures Data Grid logging categories. spec.expose Controls how Data Grid endpoints are exposed on the network. spec.configMapName Specifies a ConfigMap that contains Data Grid configuration. spec.configListener.enabled Creates a listener pod in each Data Grid cluster that allows Data Grid Operator to reconcile server-side modifications with Data Grid resources such as the Cache CR. The listener pod consumes minimal resources and is enabled by default. Setting a value of false removes the listener pod and disables bi-directional reconciliation. You should do this only if you do not need declarative Kubernetes representations of Data Grid resources created through the Data Grid Console, CLI, or client applications. spec.configListener.logging.level Configures the logging level for the ConfigListener deployments. The default level is info . You can change it to debug or error . spec.affinity Configures anti-affinity strategies that guarantee Data Grid availability. 7.3. Allocating storage resources By default, Data Grid Operator allocates 1Gi for the persistent volume claim. However you should adjust the amount of storage available to Data Grid service pods so that Data Grid can preserve cluster state during shutdown. Important If available container storage is less than the amount of available memory, data loss can occur. Procedure Allocate storage resources with the spec.service.container.storage field. Configure either the ephemeralStorage field or the storageClassName field as required. Note These fields are mutually exclusive. Add only one of them to your Infinispan CR. Apply the changes. Ephemeral storage Name of a StorageClass object Field Description spec.service.container.storage Specifies the amount of storage for Data Grid service pods. spec.service.container.ephemeralStorage Defines whether storage is ephemeral or permanent. Set the value to true to use ephemeral storage, which means all data in storage is deleted when clusters shut down or restart. The default value is false , which means storage is permanent. spec.service.container.storageClassName Specifies the name of a StorageClass object to use for the persistent volume claim (PVC). If you include this field, you must specify an existing storage class as the value. If you do not include this field, the persistent volume claim uses the storage class that has the storageclass.kubernetes.io/is-default-class annotation set to true . 7.3.1. Persistent volume claims Data Grid Operator creates a persistent volume claim (PVC) and mounts container storage at: /opt/infinispan/server/data Caches When you create caches, Data Grid permanently stores their configuration so your caches are available after cluster restarts. Data Use a file-based cache store, by adding the <file-store/> element to your Data Grid cache configuration, if you want Data Grid service pods to persist data during cluster shutdown. 7.4. Allocating CPU and memory Allocate CPU and memory resources to Data Grid pods with the Infinispan CR. Note Data Grid Operator requests 1Gi of memory from the OpenShift scheduler when creating Data Grid pods. CPU requests are unbounded by default. Procedure Allocate the number of CPU units with the spec.container.cpu field. 
Allocate the amount of memory, in bytes, with the spec.container.memory field. The cpu and memory fields have values in the format of <limit>:<requests> . For example, cpu: "2000m:1000m" limits pods to a maximum of 2000m of CPU and requests 1000m of CPU for each pod at startup. Specifying a single value sets both the limit and request. Apply your Infinispan CR. If your cluster is running, Data Grid Operator restarts the Data Grid pods so changes take effect. 7.5. Setting JVM options Pass additional JVM options to Data Grid pods at startup. Procedure Configure JVM options with the spec.container field in your Infinispan CR. Apply your Infinispan CR. If your cluster is running, Data Grid Operator restarts the Data Grid pods so changes take effect. JVM options Field Description spec.container.extraJvmOpts Specifies additional JVM options for the Data Grid Server. spec.container.routerExtraJvmOpts Specifies additional JVM options for the Gossip router. spec.container.cliExtraJvmOpts Specifies additional JVM options for the Data Grid CLI. 7.6. Configuring pod probes Optionally configure the values of the Liveness, Readiness and Startup probes used by Data Grid pods. The Data Grid Operator automatically configures the probe values to sensible defaults. We only recommend providing your own values once you have determined that the default values do not match your requirements. Procedure Configure probe values using the spec.service.container.*Probe fields: spec: service: container: readinessProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 livenessProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 startupProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 Important If no value is specified for a given probe value, then the Data Grid Operator default is used. Apply your Infinispan CR. If your cluster is running, Data Grid Operator restarts the Data Grid pods in order for the changes to take effect. 7.7. Configuring pod priority Create one or more priority classes to indicate the importance of a pod relative to other pods. Pods with higher priority are scheduled ahead of pods with lower priority, ensuring prioritization of pods running critical workloads, especially when resources become constrained. Prerequisites Have cluster-admin access to OpenShift. Procedure Define a PriorityClass object by specifying its name and value. high-priority.yaml apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000 globalDefault: false description: "Use this priority class for high priority service pods only." Create the priority class. Reference the priority class name in the pod configuration. Infinispan CR kind: Infinispan ... spec: scheduling: affinity: ... priorityClassName: "high-priority" ... You must reference an existing priority class name, otherwise the pod is rejected. Apply the changes. Additional resources Including pod priority in pod scheduling decisions 7.8. FIPS mode for your Infinispan CR The Red Hat OpenShift Container Platform can use certain Federal Information Processing Standards (FIPS) components that ensure OpenShift clusters meet the requirements of a FIPS compliance audit. If you enabled FIPS mode on your OpenShift cluster, then the Data Grid Operator automatically enables FIPS mode for your Infinispan custom resource (CR).
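A quick way to confirm whether FIPS mode is active on a cluster node is to read the kernel flag from the host. This is a generic OpenShift/RHEL check sketched here for convenience, not a command from this chapter, and <node_name> is a placeholder:
oc debug node/<node_name> -- chroot /host cat /proc/sys/crypto/fips_enabled
A value of 1 indicates that the node runs in FIPS mode, which is the condition under which Data Grid Operator enables FIPS mode for the Infinispan CR.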
Important Client certificate authentication is not currently supported with FIPS mode. Attempts to create Infinispan CR with spec.security.endpointEncryption.clientCert set to a value other than None will fail. Additional resources Support for FIPS cryptography Red Hat OpenShift Container Platform 7.9. Adjusting log pattern To customize the log display for Data Grid log traces, update the log pattern. If no custom pattern is set, the default format is: %d{HH:mm:ss,SSS} %-5p (%t) [%c] %m%throwable%n Procedure Configure Data Grid logging with the spec.logging.pattern field in your Infinispan CR. Apply the changes. Retrieve logs from Data Grid pods as required. 7.10. Adjusting log levels Change levels for different Data Grid logging categories when you need to debug issues. You can also adjust log levels to reduce the number of messages for certain categories to minimize the use of container resources. Procedure Configure Data Grid logging with the spec.logging.categories field in your Infinispan CR. Apply the changes. Retrieve logs from Data Grid pods as required. 7.10.1. Logging reference Find information about log categories and levels. Table 7.1. Log categories Root category Description Default level org.infinispan Data Grid messages info org.jgroups Cluster transport messages info Table 7.2. Log levels Log level Description trace Provides detailed information about running state of applications. This is the most verbose log level. debug Indicates the progress of individual requests or activities. info Indicates overall progress of applications, including lifecycle events. warn Indicates circumstances that can lead to error or degrade performance. error Indicates error conditions that might prevent operations or activities from being successful but do not prevent applications from running. Garbage collection (GC) messages Data Grid Operator does not log GC messages by default. You can direct GC messages to stdout with the following JVM options: extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags" 7.11. Adding labels and annotations to Data Grid resources Attach key/value labels and annotations to pods and services that Data Grid Operator creates and manages. Labels help you identify relationships between objects to better organize and monitor Data Grid resources. Annotations are arbitrary non-identifying metadata for client applications or deployment and management tooling. Note Red Hat subscription labels are automatically applied to Data Grid resources. Procedure Open your Infinispan CR for editing. Attach labels and annotations to Data Grid resources in the metadata.annotations section. Define values for annotations directly in the metadata.annotations section. Define values for labels with the metadata.labels field. Apply your Infinispan CR. 
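After you apply the CR, you can confirm that the labels and annotations propagated to the resources that Data Grid Operator manages. The following commands are an illustrative sketch rather than part of the official procedure; they assume a cluster named infinispan and rely on the app=infinispan-pod label shown in the Data Grid service CR:
oc get service infinispan -o jsonpath='{.metadata.labels}'
oc get pods -l app=infinispan-pod -o jsonpath='{.items[0].metadata.annotations}'
The Custom annotations and Custom labels examples that follow show the corresponding Infinispan CR definitions.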
Custom annotations apiVersion: infinispan.org/v1 kind: Infinispan metadata: annotations: infinispan.org/targetAnnotations: service-annotation1, service-annotation2 infinispan.org/podTargetAnnotations: pod-annotation1, pod-annotation2 infinispan.org/routerAnnotations: router-annotation1, router-annotation2 service-annotation1: value service-annotation2: value pod-annotation1: value pod-annotation2: value router-annotation1: value router-annotation2: value Custom labels apiVersion: infinispan.org/v1 kind: Infinispan metadata: annotations: infinispan.org/targetLabels: service-label1, service-label2 infinispan.org/podTargetLabels: pod-label1, pod-label2 labels: service-label1: value service-label2: value pod-label1: value pod-label2: value # The operator does not attach these labels to resources. my-label: my-value environment: development 7.12. Adding labels and annotations with environment variables Set environment variables for Data Grid Operator to add labels and annotations that automatically propagate to all Data Grid pods and services. Procedure Add labels and annotations to your Data Grid Operator subscription with the spec.config.env field in one of the following ways: Use the oc edit subscription command. Use the Red Hat OpenShift Console. Navigate to Operators > Installed Operators > Data Grid Operator . From the Actions menu, select Edit Subscription . Labels and annotations with environment variables spec: config: env: - name: INFINISPAN_OPERATOR_TARGET_LABELS value: | {"service-label1":"value", "service-label2":"value"} - name: INFINISPAN_OPERATOR_POD_TARGET_LABELS value: | {"pod-label1":"value", "pod-label2":"value"} - name: INFINISPAN_OPERATOR_TARGET_ANNOTATIONS value: | {"service-annotation1":"value", "service-annotation2":"value"} - name: INFINISPAN_OPERATOR_POD_TARGET_ANNOTATIONS value: | {"pod-annotation1":"value", "pod-annotation2":"value"} 7.13. Defining environment variables in the Data Grid Operator subscription You can define environment variables in your Data Grid Operator subscription either when you create or edit the subscription. Note If you are using the Red Hat OpenShift Console, you must first install the Data Grid Operator and then edit the existing subscription. spec.config.env field Includes the name and value fields to define environment variables. ADDITIONAL_VARS variable Includes the names of environment variables in the format of a JSON array. Environment variables within the value of the ADDITIONAL_VARS variable automatically propagate to each Data Grid Server pod managed by the associated Operator. Prerequisites Ensure the Operator Lifecycle Manager (OLM) is installed. Have an oc client. Procedure Create a subscription definition YAML for your Data Grid Operator: Use the spec.config.env field to define environment variables. Within the ADDITIONAL_VARS variable, include environment variable names in a JSON array. subscription-datagrid.yaml For example, use the environment variables to set the local time zone: subscription-datagrid.yaml Create a subscription for Data Grid Operator: Verification Retrieve the environment variables from the subscription-datagrid.yaml : Next steps Use the oc edit subscription command to modify the environment variable: To ensure the changes take effect on your Data Grid clusters, you must recreate the existing clusters. Terminate the pods by deleting the StatefulSet associated with the existing Infinispan CRs. In the Red Hat OpenShift Console, navigate to Operators > Installed Operators > Data Grid Operator .
From the Actions menu, select Edit Subscription . | [
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true' spec: replicas: 6 version: 8.4.6-1 upgrades: type: Shutdown service: type: DataGrid container: storage: 2Gi # The ephemeralStorage and storageClassName fields are mutually exclusive. ephemeralStorage: false storageClassName: my-storage-class sites: local: name: azure expose: type: LoadBalancer locations: - name: azure url: openshift://api.azure.host:6443 secretName: azure-token - name: aws clusterName: infinispan namespace: rhdg-namespace url: openshift://api.aws.host:6443 secretName: aws-token security: endpointSecretName: endpoint-identities endpointEncryption: type: Secret certSecretName: tls-secret container: extraJvmOpts: \"-XX:NativeMemoryTracking=summary\" cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\" logging: categories: org.infinispan: debug org.jgroups: debug org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error expose: type: LoadBalancer configMapName: \"my-cluster-config\" configListener: enabled: true affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: infinispan infinispan_cr: infinispan topologyKey: \"kubernetes.io/hostname\"",
"spec: service: type: DataGrid container: storage: 2Gi ephemeralStorage: true",
"spec: service: type: DataGrid container: storage: 2Gi storageClassName: my-storage-class",
"spec: container: cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\"",
"spec: container: extraJvmOpts: \"-<option>=<value>\" routerExtraJvmOpts: \"-<option>=<value>\" cliExtraJvmOpts: \"-<option>=<value>\"",
"spec: service: container: readinessProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 livenessProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 startupProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000 globalDefault: false description: \"Use this priority class for high priority service pods only.\"",
"create -f high-priority.yaml",
"kind: Infinispan spec: scheduling: affinity: priorityClassName: \"high-priority\"",
"spec: logging: pattern: %X{address} %X{user} [%d{dd/MMM/yyyy:HH:mm:ss Z}]",
"logs -f USDPOD_NAME",
"spec: logging: categories: org.infinispan: debug org.jgroups: debug",
"logs -f USDPOD_NAME",
"extraJvmOpts: \"-Xlog:gc*:stdout:time,level,tags\"",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: annotations: infinispan.org/targetAnnotations: service-annotation1, service-annotation2 infinispan.org/podTargetAnnotations: pod-annotation1, pod-annotation2 infinispan.org/routerAnnotations: router-annotation1, router-annotation2 service-annotation1: value service-annotation2: value pod-annotation1: value pod-annotation2: value router-annotation1: value router-annotation2: value",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: annotations: infinispan.org/targetLabels: service-label1, service-label2 infinispan.org/podTargetLabels: pod-label1, pod-label2 labels: service-label1: value service-label2: value pod-label1: value pod-label2: value # The operator does not attach these labels to resources. my-label: my-value environment: development",
"edit subscription datagrid -n openshift-operators",
"spec: config: env: - name: INFINISPAN_OPERATOR_TARGET_LABELS value: | {\"service-label1\":\"value\", service-label1\":\"value\"} - name: INFINISPAN_OPERATOR_POD_TARGET_LABELS value: | {\"pod-label1\":\"value\", \"pod-label2\":\"value\"} - name: INFINISPAN_OPERATOR_TARGET_ANNOTATIONS value: | {\"service-annotation1\":\"value\", \"service-annotation2\":\"value\"} - name: INFINISPAN_OPERATOR_POD_TARGET_ANNOTATIONS value: | {\"pod-annotation1\":\"value\", \"pod-annotation2\":\"value\"}",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: datagrid namespace: openshift-operators spec: channel: 8.5.x installPlanApproval: Automatic name: datagrid source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ADDITIONAL_VARS value: \"[\\\"VAR_NAME\\\", \\\"ANOTHER_VAR\\\"]\" - name: VAR_NAME value: USD(VAR_NAME_VALUE) - name: ANOTHER_VAR value: USD(ANOTHER_VAR_VALUE)",
"kind: Subscription spec: config: env: - name: ADDITIONAL_VARS value: \"[\\\"TZ\\\"]\" - name: TZ value: \"JST-9\"",
"apply -f subscription-datagrid.yaml",
"get subscription datagrid -n openshift-operators -o jsonpath='{.spec.config.env[*].name}'",
"edit subscription datagrid -n openshift-operators"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/creating-services |
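A quick way to confirm that variables defined through the subscription reached the Data Grid Server pods is to inspect a pod's environment. This is only a sketch; the pod name placeholder and the TZ variable from the time zone example above are assumptions:
oc get pods -l app=infinispan-pod
oc exec <pod_name> -- env | grep TZ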
10.3. Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli | 10.3. Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli To view the available interfaces on the system, issue a command as follows: Note that the NAME field in the output always denotes the connection ID. It is not the interface name even though it might look the same. The ID can be used in nmcli connection commands to identify a connection. Use the DEVICE name with other applications such as firewalld . To create an 802.1Q VLAN interface on Ethernet interface enp1s0 , with VLAN interface VLAN10 and ID 10 , issue a command as follows: Note that as no con-name was given for the VLAN interface, the name was derived from the interface name by prepending the type. Alternatively, specify a name with the con-name option as follows: Assigning Addresses to VLAN Interfaces You can use the same nmcli commands to assign static and dynamic interface addresses as with any other interface. For example, a command to create a VLAN interface with a static IPv4 address and gateway is as follows: To create a VLAN interface with dynamically assigned addressing, issue a command as follows: See Section 3.3.6, "Connecting to a Network Using nmcli" for examples of using nmcli commands to configure interfaces. To review the VLAN interfaces created, issue a command as follows: To view detailed information about the newly configured connection, issue a command as follows: Further options for the VLAN command are listed in the VLAN section of the nmcli(1) man page. In the man pages, the device on which the VLAN is created is referred to as the parent device. In the example above, the device was specified by its interface name, enp1s0 , but it can also be specified by the connection UUID or MAC address. To create an 802.1Q VLAN connection profile with ingress priority mapping on Ethernet interface enp2s0 , with name VLAN1 and ID 13 , issue a command as follows: To view all the parameters associated with the VLAN created above, issue a command as follows: To change the MTU, issue a command as follows: The MTU setting determines the maximum size of the network layer packet. The maximum size of the payload the link-layer frame can carry in turn limits the network layer MTU. For standard Ethernet frames, this means an MTU of 1500 bytes. It should not be necessary to change the MTU when setting up a VLAN as the link-layer header is increased in size by 4 bytes to accommodate the 802.1Q tag. At the time of writing, connection.interface-name and vlan.interface-name have to be the same (if they are set). They must therefore be changed simultaneously using nmcli 's interactive mode. To change a VLAN connection's name, issue commands as follows: The nmcli utility can be used to set and clear ioctl flags which change the way the 802.1Q code functions. The following VLAN flags are supported by NetworkManager : 0x01 - reordering of output packet headers 0x02 - use GVRP protocol 0x04 - loose binding of the interface and its master The state of the VLAN is synchronized to the state of the parent or master interface (the interface or device on which the VLAN is created). If the parent interface is set to the " down " administrative state then all associated VLANs are set down and all routes are flushed from the routing table. Flag 0x04 enables a loose binding mode, in which only the operational state is passed from the parent to the associated VLANs, but the VLAN device state is not changed.
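To see which of these flags are currently set on a VLAN device, you can inspect it with the ip utility. The command below is a sketch that reuses the enp1s0.12 device from the earlier example; the vlan line of the output shows the 802.1Q ID together with any active flags such as REORDER_HDR, GVRP, or LOOSE_BINDING:
ip -d link show enp1s0.12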
To set a VLAN flag, issue a command as follows: See Section 3.3, "Configuring IP Networking with nmcli" for an introduction to nmcli . | [
"~]USD nmcli con show NAME UUID TYPE DEVICE System enp2s0 9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04 802-3-ethernet enp2s0 System enp1s0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet enp1s0",
"~]USD nmcli con add type vlan ifname VLAN10 dev enp1s0 id 10 Connection 'vlan-VLAN10' (37750b4a-8ef5-40e6-be9b-4fb21a4b6d17) successfully added.",
"~]USD nmcli con add type vlan con-name VLAN12 dev enp1s0 id 12 Connection 'VLAN12' (b796c16a-9f5f-441c-835c-f594d40e6533) successfully added.",
"~]USD nmcli con add type vlan con-name VLAN20 dev enp1s0 id 20 ip4 10.10.10.10/24 gw4 10.10.10.254",
"~]USD nmcli con add type vlan con-name VLAN30 dev enp1s0 id 30",
"~]USD nmcli con show NAME UUID TYPE DEVICE VLAN12 4129a37d-4feb-4be5-ac17-14a193821755 vlan enp1s0.12 System enp2s0 9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04 802-3-ethernet enp2s0 System enp1s0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet enp1s0 vlan-VLAN10 1be91581-11c2-461a-b40d-893d42fed4f4 vlan VLAN10",
"~]USD nmcli -p con show VLAN12 =============================================================================== Connection profile details (VLAN12) =============================================================================== connection.id: VLAN12 connection.uuid: 4129a37d-4feb-4be5-ac17-14a193821755 connection.interface-name: -- connection.type: vlan connection.autoconnect: yes ... ------------------------------------------------------------------------------- 802-3-ethernet.port: -- 802-3-ethernet.speed: 0 802-3-ethernet.duplex: -- 802-3-ethernet.auto-negotiate: yes 802-3-ethernet.mac-address: -- 802-3-ethernet.cloned-mac-address: -- 802-3-ethernet.mac-address-blacklist: 802-3-ethernet.mtu: auto ... vlan.interface-name: -- vlan.parent: enp1s0 vlan.id: 12 vlan.flags: 0 (NONE) vlan.ingress-priority-map: vlan.egress-priority-map: ------------------------------------------------------------------------------- =============================================================================== Activate connection details (4129a37d-4feb-4be5-ac17-14a193821755) =============================================================================== GENERAL.NAME: VLAN12 GENERAL.UUID: 4129a37d-4feb-4be5-ac17-14a193821755 GENERAL.DEVICES: enp1s0.12 GENERAL.STATE: activating [output truncated]",
"~]USD nmcli con add type vlan con-name VLAN1 dev enp2s0 id 13 ingress \"2:3,3:5\"",
"~]USD nmcli connection show vlan-VLAN10",
"~]USD nmcli connection modify vlan-VLAN10 802.mtu 1496",
"~]USD nmcli con edit vlan-VLAN10 nmcli> set vlan.interface-name superVLAN nmcli> set connection.interface-name superVLAN nmcli> save nmcli> quit",
"~]USD nmcli connection modify vlan-VLAN10 vlan.flags 1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configure_802_1q_vlan_tagging_using_the_command_line_tool_nmcli |
Chapter 1. Template APIs | Chapter 1. Template APIs 1.1. BrokerTemplateInstance [template.openshift.io/v1] Description BrokerTemplateInstance holds the service broker-related state associated with a TemplateInstance. BrokerTemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. PodTemplate [v1] Description PodTemplate describes a template for creating copies of a predefined pod. Type object 1.3. Template [template.openshift.io/v1] Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. TemplateInstance [template.openshift.io/v1] Description TemplateInstance requests and records the instantiation of a Template. TemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/template_apis/template-apis |
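As a brief illustration of how these template-related objects appear on a cluster, the following commands are a sketch only; the openshift namespace is an assumption about where shared templates are commonly stored:
oc get templates -n openshift
oc get templateinstances --all-namespaces
oc get podtemplates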
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_openstack_identity_resources/making-open-source-more-inclusive |
Chapter 6. Using credentials and configurations in workspaces | Chapter 6. Using credentials and configurations in workspaces You can use your credentials and configurations in your workspaces. To do so, mount your credentials and configurations to the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance: Mount your credentials and sensitive configurations as Kubernetes Secrets . Mount your non-sensitive configurations as Kubernetes ConfigMaps . If you need to allow the Dev Workspace Pods in the cluster to access container registries that require authentication, create an image pull Secret for the Dev Workspace Pods. The mounting process uses the standard Kubernetes mounting mechanism and requires applying additional labels and annotations to your existing resources. Resources are mounted when starting a new workspace or restarting an existing one. You can create permanent mount points for various components: Maven configuration, such as the user-specific settings.xml file SSH key pairs Git-provider access tokens Git configuration AWS authorization tokens Configuration files Persistent storage Additional resources Kubernetes Documentation: Secrets Kubernetes Documentation: ConfigMaps 6.1. Mounting Secrets To mount confidential data into your workspaces, use Kubernetes Secrets. Using Kubernetes Secrets, you can mount usernames, passwords, SSH key pairs, authentication tokens (for example, for AWS), and sensitive configurations. Mount Kubernetes Secrets to the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . In your user project, you created a new Secret or determined an existing Secret to mount to all Dev Workspace containers. Procedure Add the labels, which are required for mounting the Secret, to the Secret. Optional: Use the annotations to configure how the Secret is mounted. Table 6.1. Optional annotations Annotation Description controller.devfile.io/mount-path: Specifies the mount path. Defaults to /etc/secret/ <Secret_name> . controller.devfile.io/mount-as: Specifies how the resource should be mounted: file , subpath , or env . Defaults to file . mount-as: file mounts the keys and values as files within the mount path. mount-as: subpath mounts the keys and values within the mount path using subpath volume mounts. mount-as: env mounts the keys and values as environment variables in all Dev Workspace containers. Example 6.1. Mounting a Secret as a file apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' annotations: controller.devfile.io/mount-path: '/home/user/.m2' data: settings.xml: <Base64_encoded_content> When you start a workspace, the /home/user/.m2/settings.xml file will be available in the Dev Workspace containers. With Maven, you can set a custom path for the settings.xml file. For example: 6.1.1. Creating image pull Secrets To allow the Dev Workspace Pods in the OpenShift cluster of your organization's OpenShift Dev Spaces instance to access container registries that require authentication, create an image pull Secret. You can create image pull Secrets by using oc or a .dockercfg file or a config.json file. 6.1.1.1. 
Creating an image pull Secret with oc Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure In your user project, create an image pull Secret with your private container registry details and credentials: Add the following label to the image pull Secret: 6.1.1.2. Creating an image pull Secret from a .dockercfg file If you already store the credentials for the private container registry in a .dockercfg file, you can use that file to create an image pull Secret. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . base64 command line tools are installed in the operating system you are using. Procedure Encode the .dockercfg file to Base64: Create a new OpenShift Secret in your user project: apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockercfg: <Base64_content_of_.dockercfg> type: kubernetes.io/dockercfg Apply the Secret: 6.1.1.3. Creating an image pull Secret from a config.json file If you already store the credentials for the private container registry in a USDHOME/.docker/config.json file, you can use that file to create an image pull Secret. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . base64 command line tools are installed in the operating system you are using. Procedure Encode the USDHOME/.docker/config.json file to Base64. Create a new OpenShift Secret in your user project: apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockerconfigjson: <Base64_content_of_config.json> type: kubernetes.io/dockerconfigjson Apply the Secret: 6.1.2. Using a Git-provider access token OAuth for GitHub, GitLab, Bitbucket, or Microsoft Azure Repos needs to be configured by the administrator of your organization's OpenShift Dev Spaces instance. If your administrator could not configure it for OpenShift Dev Spaces users, the workaround is for you to use a personal access token. You can configure personal access tokens on the User Preferences page of your OpenShift Dev Spaces dashboard: https:// <openshift_dev_spaces_fqdn> /dashboard/#/user-preferences?tab=personal-access-tokens , or apply it manually as a Kubernetes Secret in the namespace. Mounting your access token as a Secret enables the OpenShift Dev Spaces Server to access the remote repository that is cloned during workspace creation, including access to the repository's /.che and /.vscode folders. Apply the Secret in your user project of the OpenShift cluster of your organization's OpenShift Dev Spaces instance. After applying the Secret, you can create workspaces with clones of private Git repositories that are hosted on GitHub, GitLab, Bitbucket Server, or Microsoft Azure Repos. You can create and apply multiple access-token Secrets per Git provider. You must apply each of those Secrets in your user project. Prerequisites You have logged in to the cluster. Tip On OpenShift, you can use the oc command-line tool to log in to the cluster: USD oc login https:// <openshift_dev_spaces_fqdn> --username= <my_user> Procedure Generate your access token on your Git provider's website. 
Important Personal access tokens are sensitive information and should be kept confidential. Treat them like passwords. If you are having trouble with authentication, ensure you are using the correct token and have the appropriate permissions for cloning repositories: Open a terminal locally on your computer. Use the git command to clone the repository using your personal access token. The format of the git command varies based on the Git provider. As an example, GitHub personal access token verification can be done using the following command: Replace <PAT> with your personal access token, and username/repo with the appropriate repository path. If the token is valid and has the necessary permissions, the cloning process should be successful. Otherwise, this is an indicator of an incorrect personal access token, insufficient permissions, or other issues. Important For GitHub Enterprise Cloud, verify that the token is authorized for use within the organization . Go to https:// <openshift_dev_spaces_fqdn> /api/user/id in the web browser to get your OpenShift Dev Spaces user ID. Prepare a new OpenShift Secret. kind: Secret apiVersion: v1 metadata: name: personal-access-token- <your_choice_of_name_for_this_token> labels: app.kubernetes.io/component: scm-personal-access-token app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/che-userid: <devspaces_user_id> 1 che.eclipse.org/scm-personal-access-token-name: <git_provider_name> 2 che.eclipse.org/scm-url: <git_provider_endpoint> 3 che.eclipse.org/scm-organization: <git_provider_organization> 4 stringData: token: <Content_of_access_token> type: Opaque 1 Your OpenShift Dev Spaces user ID. 2 The Git provider name: github or gitlab or bitbucket-server or azure-devops . 3 The Git provider URL. 4 This line is only applicable to azure-devops : your Git provider user organization. Visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . Switch to your OpenShift Dev Spaces user namespace in the cluster. Tip On OpenShift: The oc command-line tool can return the namespace you are currently on in the cluster, which you can use to check your current namespace: USD oc project You can switch to your OpenShift Dev Spaces user namespace on a command line if needed: USD oc project <your_user_namespace> Apply the Secret. Tip On OpenShift, you can use the oc command-line tool: Verification Start a new workspace by using the URL of a remote Git repository that the Git provider hosts. Make some changes and push to the remote Git repository from the workspace. Additional resources Deploying Che with support for Git repositories with self-signed certificates Authorizing a personal access token for use with SAML single sign-on 6.2. Mounting ConfigMaps To mount non-confidential configuration data into your workspaces, use Kubernetes ConfigMaps. Using Kubernetes ConfigMaps, you can mount non-sensitive data such as configuration values for an application. Mount Kubernetes ConfigMaps to the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . In your user project, you created a new ConfigMap or determined an existing ConfigMap to mount to all Dev Workspace containers. Procedure Add the labels, which are required for mounting the ConfigMap, to the ConfigMap.
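For example, a minimal sketch of labeling an existing ConfigMap and confirming the result; the ConfigMap name my-settings is a placeholder:
oc label configmap my-settings controller.devfile.io/mount-to-devworkspace=true controller.devfile.io/watch-configmap=true
oc get configmap my-settings --show-labels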
Optional: Use the annotations to configure how the ConfigMap is mounted. Table 6.2. Optional annotations Annotation Description controller.devfile.io/mount-path: Specifies the mount path. Defaults to /etc/config/ <ConfigMap_name> . controller.devfile.io/mount-as: Specifies how the resource should be mounted: file , subpath , or env . Defaults to file . mount-as:file mounts the keys and values as files within the mount path. mount-as:subpath mounts the keys and values within the mount path using subpath volume mounts. mount-as:env mounts the keys and values as environment variables in all Dev Workspace containers. Example 6.2. Mounting a ConfigMap as environment variables kind: ConfigMap apiVersion: v1 metadata: name: my-settings labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: <env_var_1> : <value_1> <env_var_2> : <value_2> When you start a workspace, the <env_var_1> and <env_var_2> environment variables will be available in the Dev Workspace containers. 6.2.1. Mounting Git configuration Note The user.name and user.email fields will be set automatically to the gitconfig content from a git provider, connected to OpenShift Dev Spaces by a Git-provider access token or a token generated via OAuth, if username and email are set on the provider's user profile page. Follow the instructions below to mount a Git config file in a workspace. Prerequisites You have logged in to the cluster. Procedure Prepare a new OpenShift ConfigMap. kind: ConfigMap apiVersion: v1 metadata: name: workspace-userdata-gitconfig-configmap namespace: <user_namespace> 1 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/ data: gitconfig: <gitconfig content> 2 1 A user namespace. Visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . 2 The content of your gitconfig file content. Apply the ConfigMap. Verification Start a new workspace by using the URL of a remote Git repository that the Git provider hosts. Once the workspace is started, open a new terminal in the tools container and run git config --get-regexp user.* . Your Git user name and email should appear in the output. 6.3. Enabling artifact repositories in a restricted environment By configuring technology stacks, you can work with artifacts from in-house repositories using self-signed certificates: Maven Gradle npm Python Go NuGet 6.3.1. Maven You can enable a Maven artifact repository in Maven workspaces that run in a restricted environment. Prerequisites You are not running any Maven workspace. You know your user namespace, which is <username> -devspaces where <username> is your OpenShift Dev Spaces username. Procedure In the <username> -devspaces namespace, apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. 
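The placeholder value in the data field must be the certificate encoded as Base64 with line wrapping disabled. A minimal sketch of producing that value with GNU coreutils; the local certificate file name tls.cer is an assumption:
base64 -w 0 tls.cer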
In the <username> -devspaces namespace, apply the ConfigMap to create the settings.xml file: kind: ConfigMap apiVersion: v1 metadata: name: settings-xml annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: settings.xml: | <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository/> <interactiveMode/> <offline/> <pluginGroups/> <servers/> <mirrors> <mirror> <id>redhat-ga-mirror</id> <name>Red Hat GA</name> <url>https:// <maven_artifact_repository_route> /repository/redhat-ga/</url> <mirrorOf>redhat-ga</mirrorOf> </mirror> <mirror> <id>maven-central-mirror</id> <name>Maven Central</name> <url>https:// <maven_artifact_repository_route> /repository/maven-central/</url> <mirrorOf>maven-central</mirrorOf> </mirror> <mirror> <id>jboss-public-repository-mirror</id> <name>JBoss Public Maven Repository</name> <url>https:// <maven_artifact_repository_route> /repository/jboss-public/</url> <mirrorOf>jboss-public-repository</mirrorOf> </mirror> </mirrors> <proxies/> <profiles/> <activeProfiles/> </settings> Optional: When using JBoss EAP-based devfiles, apply a second settings-xml ConfigMap in the <username> -devspaces namespace, and with the same content, a different name, and the /home/jboss/.m2 mount path. In the <username> -devspaces namespace, apply the ConfigMap for the TrustStore initialization script: Java 8 kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java8-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -trustcacerts -keystore ~/.java/current/jre/lib/security/cacerts -storepass changeit Java 11 kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java11-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit Start a Maven workspace. Open a new terminal in the tools container. Run ~/init-truststore.sh . 6.3.2. Gradle You can enable a Gradle artifact repository in Gradle workspaces that run in a restricted environment. Prerequisites You are not running any Gradle workspace. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. 
Apply the ConfigMap for the TrustStore initialization script: kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit Apply the ConfigMap for the Gradle init script: kind: ConfigMap apiVersion: v1 metadata: name: init-gradle annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.gradle labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init.gradle: | allprojects { repositories { mavenLocal () maven { url "https:// <gradle_artifact_repository_route> /repository/maven-public/" credentials { username "admin" password "passwd" } } } } Start a Gradle workspace. Open a new terminal in the tools container. Run ~/init-truststore.sh . 6.3.3. npm You can enable an npm artifact repository in npm workspaces that run in a restricted environment. Prerequisites You are not running any npm workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /public-certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: nexus.cer: >- <Base64_encoded_content_of_public_cert>__ 1 1 Base64 encoding with disabled line wrapping. Apply the ConfigMap to set the following environment variables in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: NPM_CONFIG_REGISTRY: >- https:// <npm_artifact_repository_route> /repository/npm-all/ 6.3.3.1. Disabling self-signed certificate validation Run the command below to disable SSL/TLS, bypassing the validation of your self-signed certificates. Note that this is a potential security risk. For a better solution, configure a self-signed certificate you trust with NODE_EXTRA_CA_CERTS . Procedure Run the following command in the terminal: npm config set strict-ssl false 6.3.3.2. Configuring NODE_EXTRA_CA_CERTS to use a certificate Use the command below to set NODE_EXTRA_CA_CERTS to point to where you have your SSL/TLS certificate. Procedure Run the following command in the terminal: `export NODE_EXTRA_CA_CERTS=/public-certs/nexus.cer` 1 `npm install` 1 /public-certs/nexus.cer is the path to self-signed SSL/TLS certificate of Nexus artifactory. 6.3.4. Python You can enable a Python artifact repository in Python workspaces that run in a restricted environment. Prerequisites You are not running any Python workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. 
Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. Apply the ConfigMap to set the following environment variables in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: PIP_INDEX_URL: >- https:// <python_artifact_repository_route> /repository/pypi-all/ PIP_CERT: /home/user/certs/tls.cer 6.3.5. Go You can enable a Go artifact repository in Go workspaces that run in a restricted environment. Prerequisites You are not running any Go workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. Apply the ConfigMap to set the following environment variables in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: GOPROXY: >- http:// <athens_proxy_route> SSL_CERT_FILE: /home/user/certs/tls.cer 6.3.6. NuGet You can enable a NuGet artifact repository in NuGet workspaces that run in a restricted environment. Prerequisites You are not running any NuGet workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. 
Apply the ConfigMap to set the environment variable for the path of the TLS certificate file in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: SSL_CERT_FILE: /home/user/certs/tls.cer Apply the ConfigMap to create the nuget.config file: kind: ConfigMap apiVersion: v1 metadata: name: init-nuget annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /projects labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: nuget.config: | <?xml version="1.0" encoding="UTF-8"?> <configuration> <packageSources> <add key="nexus2" value="https:// <nuget_artifact_repository_route> /repository/nuget-group/"/> </packageSources> <packageSourceCredentials> <nexus2> <add key="Username" value="admin" /> <add key="Password" value="passwd" /> </nexus2> </packageSourceCredentials> </configuration> | [
"oc label secret <Secret_name> controller.devfile.io/mount-to-devworkspace=true controller.devfile.io/watch-secret=true",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' annotations: controller.devfile.io/mount-path: '/home/user/.m2' data: settings.xml: <Base64_encoded_content>",
"mvn --settings /home/user/.m2/settings.xml clean install",
"oc create secret docker-registry <Secret_name> --docker-server= <registry_server> --docker-username= <username> --docker-password= <password> --docker-email= <email_address>",
"oc label secret <Secret_name> controller.devfile.io/devworkspace_pullsecret=true controller.devfile.io/watch-secret=true",
"cat .dockercfg | base64 | tr -d '\\n'",
"apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockercfg: <Base64_content_of_.dockercfg> type: kubernetes.io/dockercfg",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"cat config.json | base64 | tr -d '\\n'",
"apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockerconfigjson: <Base64_content_of_config.json> type: kubernetes.io/dockerconfigjson",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"git clone https://<PAT>@github.com/username/repo.git",
"kind: Secret apiVersion: v1 metadata: name: personal-access-token- <your_choice_of_name_for_this_token> labels: app.kubernetes.io/component: scm-personal-access-token app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/che-userid: <devspaces_user_id> 1 che.eclipse.org/scm-personal-access-token-name: <git_provider_name> 2 che.eclipse.org/scm-url: <git_provider_endpoint> 3 che.eclipse.org/scm-organization: <git_provider_organization> 4 stringData: token: <Content_of_access_token> type: Opaque",
"oc apply -f - <<EOF <Secret_prepared_in_step_5> EOF",
"oc label configmap <ConfigMap_name> controller.devfile.io/mount-to-devworkspace=true controller.devfile.io/watch-configmap=true",
"kind: ConfigMap apiVersion: v1 metadata: name: my-settings labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: <env_var_1> : <value_1> <env_var_2> : <value_2>",
"kind: ConfigMap apiVersion: v1 metadata: name: workspace-userdata-gitconfig-configmap namespace: <user_namespace> 1 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/ data: gitconfig: <gitconfig content> 2",
"oc apply -f - <<EOF <ConfigMap_prepared_in_step_1> EOF",
"kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1",
"kind: ConfigMap apiVersion: v1 metadata: name: settings-xml annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: settings.xml: | <settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd\"> <localRepository/> <interactiveMode/> <offline/> <pluginGroups/> <servers/> <mirrors> <mirror> <id>redhat-ga-mirror</id> <name>Red Hat GA</name> <url>https:// <maven_artifact_repository_route> /repository/redhat-ga/</url> <mirrorOf>redhat-ga</mirrorOf> </mirror> <mirror> <id>maven-central-mirror</id> <name>Maven Central</name> <url>https:// <maven_artifact_repository_route> /repository/maven-central/</url> <mirrorOf>maven-central</mirrorOf> </mirror> <mirror> <id>jboss-public-repository-mirror</id> <name>JBoss Public Maven Repository</name> <url>https:// <maven_artifact_repository_route> /repository/jboss-public/</url> <mirrorOf>jboss-public-repository</mirrorOf> </mirror> </mirrors> <proxies/> <profiles/> <activeProfiles/> </settings>",
"kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java8-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -trustcacerts -keystore ~/.java/current/jre/lib/security/cacerts -storepass changeit",
"kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java11-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit",
"kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1",
"kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit",
"kind: ConfigMap apiVersion: v1 metadata: name: init-gradle annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.gradle labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init.gradle: | allprojects { repositories { mavenLocal () maven { url \"https:// <gradle_artifact_repository_route> /repository/maven-public/\" credentials { username \"admin\" password \"passwd\" } } } }",
"kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /public-certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: nexus.cer: >- <Base64_encoded_content_of_public_cert>__ 1",
"kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: NPM_CONFIG_REGISTRY: >- https:// <npm_artifact_repository_route> /repository/npm-all/",
"npm config set strict-ssl false",
"`export NODE_EXTRA_CA_CERTS=/public-certs/nexus.cer` 1 `npm install`",
"kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1",
"kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: PIP_INDEX_URL: >- https:// <python_artifact_repository_route> /repository/pypi-all/ PIP_CERT: /home/user/certs/tls.cer",
"kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1",
"kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: GOPROXY: >- http:// <athens_proxy_route> SSL_CERT_FILE: /home/user/certs/tls.cer",
"kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1",
"kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: SSL_CERT_FILE: /home/user/certs/tls.cer",
"kind: ConfigMap apiVersion: v1 metadata: name: init-nuget annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /projects labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: nuget.config: | <?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <packageSources> <add key=\"nexus2\" value=\"https:// <nuget_artifact_repository_route> /repository/nuget-group/\"/> </packageSources> <packageSourceCredentials> <nexus2> <add key=\"Username\" value=\"admin\" /> <add key=\"Password\" value=\"passwd\" /> </nexus2> </packageSourceCredentials> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/user_guide/using-credentials-and-configurations-in-workspaces |
16.7.2. Running virt-df | 16.7.2. Running virt-df To display file system usage for all file systems found in a disk image, enter the following: (Where /dev/vg_guests/RHEL6 is a Red Hat Enterprise Linux 6 guest virtual machine disk image. The path in this case is the host physical machine logical volume where this disk image is located.) You can also use virt-df on its own to list information about all of your guest virtual machines (i.e. those known to libvirt). The virt-df command recognizes some of the same options as the standard df command, such as -h (human-readable) and -i (show inodes instead of blocks). virt-df also works on Windows guest virtual machines: Note You can use virt-df safely on live guest virtual machines, since it only needs read-only access. However, you should not expect the numbers to be precisely the same as those from a df command running inside the guest virtual machine. This is because what is on disk will be slightly out of sync with the state of the live guest virtual machine. Nevertheless, it should be a good enough approximation for analysis and monitoring purposes. virt-df is designed to allow you to integrate the statistics into monitoring tools, databases and so on. This allows system administrators to generate reports on trends in disk usage, and alerts if a guest virtual machine is about to run out of disk space. To do this, you should use the --csv option to generate machine-readable Comma-Separated Values (CSV) output. CSV output is readable by most databases, spreadsheet software and a variety of other tools and programming languages. The raw CSV looks like the following: For resources and ideas on how to process this output to produce trends and alerts, refer to the following URL: http://libguestfs.org/virt-df.1.html . | [
"virt-df /dev/vg_guests/RHEL6 Filesystem 1K-blocks Used Available Use% RHEL6:/dev/sda1 101086 10233 85634 11% RHEL6:/dev/VolGroup00/LogVol00 7127864 2272744 4493036 32%",
"virt-df -h Filesystem Size Used Available Use% F14x64:/dev/sda1 484.2M 66.3M 392.9M 14% F14x64:/dev/vg_f14x64/lv_root 7.4G 3.0G 4.4G 41% RHEL6brewx64:/dev/sda1 484.2M 52.6M 406.6M 11% RHEL6brewx64:/dev/vg_rhel6brewx64/lv_root 13.3G 3.4G 9.2G 26% Win7x32:/dev/sda1 100.0M 24.1M 75.9M 25% Win7x32:/dev/sda2 19.9G 7.4G 12.5G 38%",
"virt-df --csv WindowsGuest Virtual Machine,Filesystem,1K-blocks,Used,Available,Use% Win7x32,/dev/sda1,102396,24712,77684,24.1% Win7x32,/dev/sda2,20866940,7786652,13080288,37.3%"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/run-virt-df |
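Building on the CSV output shown above, the following is a sketch of one way to turn it into a simple alert; the 90% threshold and the assumption that Use% is the last CSV field are illustrative only:
virt-df --csv | awk -F, 'NR > 1 { use = $NF; gsub("%", "", use); if (use + 0 > 90) print $1 ": " $2 " is " use "% full" }'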
Apache HTTP Server Installation Guide | Apache HTTP Server Installation Guide Red Hat JBoss Core Services 2.4.57 For use with Red Hat JBoss middleware products. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_installation_guide/index |
Chapter 3. Developing and deploying Eclipse Vert.x runtime application | Chapter 3. Developing and deploying Eclipse Vert.x runtime application You can create a new Eclipse Vert.x application and deploy it to OpenShift or stand-alone Red Hat Enterprise Linux. 3.1. Developing Eclipse Vert.x application For a basic Eclipse Vert.x application, you need to create the following: A Java class containing Eclipse Vert.x methods. A pom.xml file containing information required by Maven to build the application. The following procedure creates a simple Greeting application that returns "Greetings!" as response. Note For building and deploying your applications to OpenShift, Eclipse Vert.x 4.3 only supports builder images based on OpenJDK 8 and OpenJDK 11. Oracle JDK and OpenJDK 9 builder images are not supported. Prerequisites OpenJDK 8 or OpenJDK 11 installed. Maven installed. Procedure Create a new directory myApp , and navigate to it. USD mkdir myApp USD cd myApp This is the root directory for the application. Create directory structure src/main/java/com/example/ in the root directory, and navigate to it. USD mkdir -p src/main/java/com/example/ USD cd src/main/java/com/example/ Create a Java class file MyApp.java containing the application code. package com.example; import io.vertx.core.AbstractVerticle; import io.vertx.core.Promise; public class MyApp extends AbstractVerticle { @Override public void start(Promise<Void> promise) { vertx .createHttpServer() .requestHandler(r -> r.response().end("Greetings!")) .listen(8080, result -> { if (result.succeeded()) { promise.complete(); } else { promise.fail(result.cause()); } }); } } Create a pom.xml file in the application root directory myApp with the following content: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>my-app</artifactId> <version>1.0.0-SNAPSHOT</version> <packaging>jar</packaging> <name>My Application</name> <description>Example application using Vert.x</description> <properties> <vertx.version>4.3.7.redhat-00002</vertx.version> <vertx-maven-plugin.version>1.0.24</vertx-maven-plugin.version> <vertx.verticle>com.example.MyApp</vertx.verticle> <!-- Specify the JDK builder image used to build your application. --> <jkube.generator.from>registry.access.redhat.com/ubi8/openjdk-11</jkube.generator.from> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <!-- Import dependencies from the Vert.x BOM. --> <dependencyManagement> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-dependencies</artifactId> <version>USD{vertx.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <!-- Specify the Vert.x artifacts that your application depends on. --> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-core</artifactId> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-web</artifactId> </dependency> </dependencies> <!-- Specify the repositories containing Vert.x artifacts. 
--> <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <!-- Specify the repositories containing the plugins used to execute the build of your application. --> <pluginRepositories> <pluginRepository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </pluginRepository> </pluginRepositories> <!-- Configure your application to be packaged using the Vert.x Maven Plugin. --> <build> <plugins> <plugin> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>USD{vertx-maven-plugin.version}</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Build the application using Maven from the root directory of the application. USD mvn vertx:run Verify that the application is running. Using curl or your browser, verify your application is running at http://localhost:8080 . USD curl http://localhost:8080 Greetings! Additional information As a recommended practice, you can configure liveness and readiness probes to enable health monitoring for your application when running on OpenShift. 3.2. Deploying Eclipse Vert.x application to OpenShift To deploy your Eclipse Vert.x application to OpenShift, configure the pom.xml file in your application and then use the OpenShift Maven plugin. Note The Fabric8 Maven plugin is no longer supported. Use the OpenShift Maven plugin to deploy your Eclipse Vert.x applications on OpenShift. For more information, see the section migrating from Fabric8 Maven Plugin to Eclipse JKube . You can specify a Java image by replacing the jkube.generator.from URL in the pom.xml file. The images are available in the Red Hat Ecosystem Catalog . For example, the Java image for RHEL 7 with OpenJDK 8 is specified as: 3.2.1. Supported Java images for Eclipse Vert.x Eclipse Vert.x is certified and tested with various Java images that are available for different operating systems. For example, Java images are available for RHEL 7 with OpenJDK 8 or OpenJDK 11. Eclipse Vert.x introduces support for building and deploying Eclipse Vert.x applications to OpenShift with OCI-compliant Universal Base Images for Red Hat OpenJDK 8 and Red Hat OpenJDK 11 on RHEL 8. You require Docker or podman authentication to access the RHEL 8 images in the Red Hat Ecosystem Catalog. The following table lists the container images supported by Eclipse Vert.x for different architectures. These container images are available in the Red Hat Ecosystem Catalog . In the catalog, you can search and download the images listed in the table below. The image pages contain authentication procedures required to access the images. Table 3.1. OpenJDK images and architectures JDK (OS) Architecture supported Images available in Red Hat Ecosystem Catalog OpenJDK8 (RHEL 7) x86_64 redhat-openjdk-18/openjdk18-openshift OpenJDK11 (RHEL 7) x86_64 openjdk/openjdk-11-rhel7 OpenJDK8 (RHEL 8) x86_64 ubi8/openjdk-8-runtime OpenJDK11 (RHEL 8) x86_64, IBM Z, and IBM Power Systems ubi8/openjdk-11 Note The use of a RHEL 8-based container on a RHEL 7 host, for example with OpenShift 3 or OpenShift 4, has limited support. For more information, see the Red Hat Enterprise Linux Container Compatibility Matrix . 3.2.2. 
Preparing Eclipse Vert.x application for OpenShift deployment For deploying your Eclipse Vert.x application to OpenShift, it must contain: Launcher profile information in the application's pom.xml file. In the following procedure, a profile with OpenShift Maven plugin is used for building and deploying the application to OpenShift. Prerequisites Maven is installed. Docker or podman authentication into Red Hat Ecosystem Catalog to access RHEL 8 images. Procedure Add the following content to the pom.xml file in the application root directory: <!-- Specify the JDK builder image used to build your application. --> <properties> <jkube.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest</jkube.generator.from> </properties> ... <profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.1.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> <goal>apply</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles> Replace the jkube.generator.from property in the pom.xml file to specify the OpenJDK image that you want to use. x86_64 architecture RHEL 7 with OpenJDK 8 <jkube.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest</jkube.generator.from> RHEL 7 with OpenJDK 11 <jkube.generator.from>registry.access.redhat.com/openjdk/openjdk-11-rhel7:latest</jkube.generator.from> RHEL 8 with OpenJDK 8 <jkube.generator.from>registry.access.redhat.com/ubi8/openjdk-8:latest</jkube.generator.from> x86_64, s390x (IBM Z), and ppc64le (IBM Power Systems) architectures RHEL 8 with OpenJDK 11 <jkube.generator.from>registry.access.redhat.com/ubi8/openjdk-11:latest</jkube.generator.from> 3.2.3. Deploying Eclipse Vert.x application to OpenShift using OpenShift Maven plugin To deploy your Eclipse Vert.x application to OpenShift, you must perform the following: Log in to your OpenShift instance. Deploy the application to the OpenShift instance. Prerequisites oc CLI client installed. Maven installed. Procedure Log in to your OpenShift instance with the oc client. USD oc login ... Create a new project in the OpenShift instance. USD oc new-project MY_PROJECT_NAME Deploy the application to OpenShift using Maven from the application's root directory. The root directory of an application contains the pom.xml file. USD mvn clean oc:deploy -Popenshift This command uses the OpenShift Maven plugin to launch the S2I process on OpenShift and start the pod. Verify the deployment. Check the status of your application and ensure your pod is running. USD oc get pods -w NAME READY STATUS RESTARTS AGE MY_APP_NAME-1-aaaaa 1/1 Running 0 58s MY_APP_NAME-s2i-1-build 0/1 Completed 0 2m The MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started. Your specific pod name will vary. Determine the route for the pod. Example Route Information USD oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION MY_APP_NAME MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME MY_APP_NAME 8080 The route information of a pod gives you the base URL which you use to access it. In this example, http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME is the base URL to access the application. Verify that your application is running in OpenShift. USD curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME Greetings! 3.3. 
Deploying Eclipse Vert.x application to stand-alone Red Hat Enterprise Linux To deploy your Eclipse Vert.x application to stand-alone Red Hat Enterprise Linux, configure the pom.xml file in the application, package it using Maven and deploy using the java -jar command. Prerequisites RHEL 7 or RHEL 8 installed. 3.3.1. Preparing Eclipse Vert.x application for stand-alone Red Hat Enterprise Linux deployment For deploying your Eclipse Vert.x application to stand-alone Red Hat Enterprise Linux, you must first package the application using Maven. Prerequisites Maven installed. Procedure Add the following content to the pom.xml file in the application's root directory: ... <build> <plugins> <plugin> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>1.0.24</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> ... Package your application using Maven. USD mvn clean package The resulting JAR file is in the target directory. 3.3.2. Deploying Eclipse Vert.x application to stand-alone Red Hat Enterprise Linux using jar To deploy your Eclipse Vert.x application to stand-alone Red Hat Enterprise Linux, use java -jar command. Prerequisites RHEL 7 or RHEL 8 installed. OpenJDK 8 or OpenJDK 11 installed. A JAR file with the application. Procedure Deploy the JAR file with the application. USD java -jar my-app-fat.jar Verify the deployment. Use curl or your browser to verify your application is running at http://localhost:8080 : USD curl http://localhost:8080 | [
"mkdir myApp cd myApp",
"mkdir -p src/main/java/com/example/ cd src/main/java/com/example/",
"package com.example; import io.vertx.core.AbstractVerticle; import io.vertx.core.Promise; public class MyApp extends AbstractVerticle { @Override public void start(Promise<Void> promise) { vertx .createHttpServer() .requestHandler(r -> r.response().end(\"Greetings!\")) .listen(8080, result -> { if (result.succeeded()) { promise.complete(); } else { promise.fail(result.cause()); } }); } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>my-app</artifactId> <version>1.0.0-SNAPSHOT</version> <packaging>jar</packaging> <name>My Application</name> <description>Example application using Vert.x</description> <properties> <vertx.version>4.3.7.redhat-00002</vertx.version> <vertx-maven-plugin.version>1.0.24</vertx-maven-plugin.version> <vertx.verticle>com.example.MyApp</vertx.verticle> <!-- Specify the JDK builder image used to build your application. --> <jkube.generator.from>registry.access.redhat.com/ubi8/openjdk-11</jkube.generator.from> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <!-- Import dependencies from the Vert.x BOM. --> <dependencyManagement> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-dependencies</artifactId> <version>USD{vertx.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <!-- Specify the Vert.x artifacts that your application depends on. --> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-core</artifactId> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-web</artifactId> </dependency> </dependencies> <!-- Specify the repositories containing Vert.x artifacts. --> <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <!-- Specify the repositories containing the plugins used to execute the build of your application. --> <pluginRepositories> <pluginRepository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </pluginRepository> </pluginRepositories> <!-- Configure your application to be packaged using the Vert.x Maven Plugin. --> <build> <plugins> <plugin> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>USD{vertx-maven-plugin.version}</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>",
"mvn vertx:run",
"curl http://localhost:8080 Greetings!",
"<jkube.generator.from>IMAGE_NAME</jkube.generator.from>",
"<jkube.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest</jkube.generator.from>",
"<!-- Specify the JDK builder image used to build your application. --> <properties> <jkube.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest</jkube.generator.from> </properties> <profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.1.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> <goal>apply</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles>",
"<jkube.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest</jkube.generator.from>",
"<jkube.generator.from>registry.access.redhat.com/openjdk/openjdk-11-rhel7:latest</jkube.generator.from>",
"<jkube.generator.from>registry.access.redhat.com/ubi8/openjdk-8:latest</jkube.generator.from>",
"<jkube.generator.from>registry.access.redhat.com/ubi8/openjdk-11:latest</jkube.generator.from>",
"oc login",
"oc new-project MY_PROJECT_NAME",
"mvn clean oc:deploy -Popenshift",
"oc get pods -w NAME READY STATUS RESTARTS AGE MY_APP_NAME-1-aaaaa 1/1 Running 0 58s MY_APP_NAME-s2i-1-build 0/1 Completed 0 2m",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION MY_APP_NAME MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME MY_APP_NAME 8080",
"curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME Greetings!",
"<build> <plugins> <plugin> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>1.0.24</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build>",
"mvn clean package",
"java -jar my-app-fat.jar",
"curl http://localhost:8080"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_runtime_guide/developing-and-deploying-vertx-application_vertx |
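A minimal verification sketch for the OpenShift deployment steps described above, assuming the MY_PROJECT_NAME and MY_APP_NAME placeholders used in that chapter and a route exposed over plain HTTP; it reads the route host with a JSONPath query instead of copying it from the oc get routes output:

# Select the project that holds the deployment
oc project MY_PROJECT_NAME
# Read the route host for the application
APP_HOST=$(oc get route MY_APP_NAME -o jsonpath='{.spec.host}')
# Call the application through the route and expect the Greetings! response
curl "http://${APP_HOST}"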
2.14. pqos | 2.14. pqos The pqos utility, which is available from the intel-cmt-cat package, enables you to both monitor and control CPU cache and memory bandwidth on recent Intel processors. You can use it for workload isolation and improving performance determinism in multitenant deployments. It exposes the following processor capabilities from the Resource Director Technology (RDT) feature set: Monitoring Last Level Cache (LLC) usage and contention monitoring using the Cache Monitoring Technology (CMT) Per-thread memory bandwidth monitoring using the Memory Bandwidth Monitoring (MBM) technology Allocation Controlling the amount of LLC space that is available for specific threads and processes using the Cache Allocation Technology (CAT) Controlling code and data placement in the LLC using the Code and Data Prioritization (CDP) technology Use the following command to list the RDT capabilities supported on your system and to display the current RDT configuration: Additional Resources For more information about using pqos , see the pqos (8) man page. For detailed information on the CMT, MBM, CAT, and CDP processor features, see the official Intel documentation: Intel(R) Resource Director Technology (Intel(R) RDT) . | [
"pqos --show --verbose"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-pqos |
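The pqos section above describes both the monitoring (CMT, MBM) and allocation (CAT, CDP) capabilities but only shows the --show invocation. The following sketch illustrates both sides using option syntax from the pqos(8) man page shipped with intel-cmt-cat; the core numbers and the capacity bitmask are illustrative assumptions, and the exact accepted syntax should be checked against the installed version:

# Monitor LLC occupancy and memory bandwidth for cores 0-3 (CMT/MBM)
pqos -m "all:0-3"
# Define class of service 1 with a 4-way LLC capacity bitmask (CAT)
pqos -e "llc:1=0x000f"
# Associate cores 0 and 1 with class of service 1
pqos -a "llc:1=0,1"
# Reset allocation to the default configuration
pqos -R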
Troubleshooting Guide | Troubleshooting Guide Red Hat Ceph Storage 7 Troubleshooting Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | [
"cephadm shell",
"ceph health detail",
"ceph -W cephadm",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 osds down; 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set [WRN] OSD_DOWN: 1 osds down osd.1 (root=default,host=host01) is down [WRN] OSD_FLAGS: 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set osd.1 has flags noup",
"ceph health mute HEALTH_MESSAGE",
"ceph health mute OSD_DOWN",
"ceph health mute HEALTH_MESSAGE DURATION",
"ceph health mute OSD_DOWN 10m",
"ceph -s cluster: id: 81a4597a-b711-11eb-8cb8-001a4a000740 health: HEALTH_OK (muted: OSD_DOWN(9m) OSD_FLAGS(9m)) services: mon: 3 daemons, quorum host01,host02,host03 (age 33h) mgr: host01.pzhfuh(active, since 33h), standbys: host02.wsnngf, host03.xwzphg osd: 11 osds: 10 up (since 4m), 11 in (since 5d) data: pools: 1 pools, 1 pgs objects: 13 objects, 0 B usage: 85 MiB used, 165 GiB / 165 GiB avail pgs: 1 active+clean",
"ceph health mute HEALTH_MESSAGE DURATION --sticky",
"ceph health mute OSD_DOWN 1h --sticky",
"ceph health unmute HEALTH_MESSAGE",
"ceph health unmute OSD_DOWN",
"dnf install sos",
"sos report -a --all-logs",
"sos report --all-logs -e ceph_mgr,ceph_common,ceph_mon,ceph_osd,ceph_ansible,ceph_mds,ceph_rgw",
"debug_ms = 5 debug_mon = 20 debug_paxos = 20 debug_auth = 20",
"2022-05-12 12:37:04.278761 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:04.278792 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 min_last_epoch_clean 322 2022-05-12 12:37:04.278795 7f45a9afc700 10 mon.cephn2@0(leader).log v1010106 log 2022-05-12 12:37:04.278799 7f45a9afc700 10 mon.cephn2@0(leader).auth v2877 auth 2022-05-12 12:37:04.278811 7f45a9afc700 20 mon.cephn2@0(leader) e1 sync_trim_providers 2022-05-12 12:37:09.278914 7f45a9afc700 11 mon.cephn2@0(leader) e1 tick 2022-05-12 12:37:09.278949 7f45a9afc700 10 mon.cephn2@0(leader).pg v8126 v8126: 64 pgs: 64 active+clean; 60168 kB data, 172 MB used, 20285 MB / 20457 MB avail 2022-05-12 12:37:09.278975 7f45a9afc700 10 mon.cephn2@0(leader).paxosservice(pgmap 7511..8126) maybe_trim trim_to 7626 would only trim 115 < paxos_service_trim_min 250 2022-05-12 12:37:09.278982 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:09.278989 7f45a9afc700 5 mon.cephn2@0(leader).paxos(paxos active c 1028850..1029466) is_readable = 1 - now=2021-08-12 12:37:09.278990 lease_expire=0.000000 has v0 lc 1029466 . 2022-05-12 12:59:18.769963 7f45a92fb700 1 -- 192.168.0.112:6789/0 <== osd.1 192.168.0.114:6800/2801 5724 ==== pg_stats(0 pgs tid 3045 v 0) v1 ==== 124+0+0 (2380105412 0 0) 0x5d96300 con 0x4d5bf40 2022-05-12 12:59:18.770053 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.114:6800/2801 -- pg_stats_ack(0 pgs tid 3045) v1 -- ?+0 0x550ae00 con 0x4d5bf40 2022-05-12 12:59:32.916397 7f45a9afc700 0 mon.cephn2@0(leader).data_health(1) update_stats avail 53% total 1951 MB, used 780 MB, avail 1053 MB . 2022-05-12 13:01:05.256263 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.113:6800/2410 -- mon_subscribe_ack(300s) v1 -- ?+0 0x4f283c0 con 0x4d5b440",
"debug_ms = 5 debug_osd = 20",
"2022-05-12 11:27:53.869151 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.17.4:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x63baa00 con 0x578dee0 2022-05-12 11:27:53.869214 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.0.114:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x638f200 con 0x578e040 2022-05-12 11:27:53.870215 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.0.114:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x63c1a00 con 0x578e040 2022-05-12 11:27:53.870698 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.17.4:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x6313200 con 0x578dee0 . 2022-05-12 11:28:10.432313 7f5d6e71f700 5 osd.0 322 tick 2022-05-12 11:28:10.432375 7f5d6e71f700 20 osd.0 322 scrub_random_backoff lost coin flip, randomly backing off 2022-05-12 11:28:10.432381 7f5d6e71f700 10 osd.0 322 do_waiters -- start 2022-05-12 11:28:10.432383 7f5d6e71f700 10 osd.0 322 do_waiters -- finish",
"ceph tell TYPE . ID injectargs --debug- SUBSYSTEM VALUE [-- NAME VALUE ]",
"ceph tell osd.0 injectargs --debug-osd 0/5",
"ceph daemon NAME config show | less",
"ceph daemon osd.0 config show | less",
"[global] debug_ms = 1/5 [mon] debug_mon = 20 debug_paxos = 1/5 debug_auth = 2 [osd] debug_osd = 1/5 debug_monc = 5/20 [mds] debug_mds = 1",
"rotate 7 weekly size SIZE compress sharedscripts",
"rotate 7 weekly size 500 MB compress sharedscripts size 500M",
"crontab -e",
"30 * * * * /usr/sbin/logrotate /etc/logrotate.d/ceph-d3bb5396-c404-11ee-9e65-002590fc2a2e >/dev/null 2>&1",
"logrotate -f",
"logrotate -f /etc/logrotate.d/ceph-12ab345c-1a2b-11ed-b736-fa163e4f6220",
"ll LOG_LOCATION",
"ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 -rw-r--r--. 1 ceph ceph 412 Sep 28 09:26 opslog.log.1.gz",
"/usr/local/bin/s3cmd ls",
"/usr/local/bin/s3cmd mb s3:// NEW_BUCKET_NAME",
"/usr/local/bin/s3cmd mb s3://bucket1 Bucket `s3://bucket1` created",
"ll LOG_LOCATION",
"ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 total 852 -rw-r--r--. 1 ceph ceph 920 Jun 29 02:17 opslog.log -rw-r--r--. 1 ceph ceph 412 Jun 28 09:26 opslog.log.1.gz",
"tail -f LOG_LOCATION /opslog.log",
"tail -f /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220/opslog.log {\"bucket\":\"\",\"time\":\"2022-09-29T06:17:03.133488Z\",\"time_local\":\"2022-09- 29T06:17:03.133488+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"list_buckets\",\"uri\":\"GET / HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":232, \"bytes_received\":0,\"object_size\":0,\"total_time\":9,\"user_agent\":\"\",\"referrer\": \"\",\"trans_id\":\"tx00000c80881a9acd2952a-006335385f-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false} {\"bucket\":\"cn1\",\"time\":\"2022-09-29T06:17:10.521156Z\",\"time_local\":\"2022-09- 29T06:17:10.521156+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"create_bucket\",\"uri\":\"PUT /cn1/ HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":0, \"bytes_received\":0,\"object_size\":0,\"total_time\":106,\"user_agent\":\"\", \"referrer\":\"\",\"trans_id\":\"tx0000058d60c593632c017-0063353866-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false}",
"dnf install net-tools dnf install telnet",
"cat /etc/ceph/ceph.conf minimal ceph.conf for 57bddb48-ee04-11eb-9962-001a4a000672 [global] fsid = 57bddb48-ee04-11eb-9962-001a4a000672 mon_host = [v2:10.74.249.26:3300/0,v1:10.74.249.26:6789/0] [v2:10.74.249.163:3300/0,v1:10.74.249.163:6789/0] [v2:10.74.254.129:3300/0,v1:10.74.254.129:6789/0] [mon.host01] public network = 10.74.248.0/21",
"ip link list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:1a:4a:00:06:72 brd ff:ff:ff:ff:ff:ff",
"ping SHORT_HOST_NAME",
"ping host02",
"firewall-cmd --info-zone= ZONE telnet IP_ADDRESS PORT",
"firewall-cmd --info-zone=public public (active) target: default icmp-block-inversion: no interfaces: ens3 sources: services: ceph ceph-mon cockpit dhcpv6-client ssh ports: 9283/tcp 8443/tcp 9093/tcp 9094/tcp 3000/tcp 9100/tcp 9095/tcp protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: telnet 192.168.0.22 9100",
"ethtool -S INTERFACE",
"ethtool -S ens3 | grep errors NIC statistics: rx_fcs_errors: 0 rx_align_errors: 0 rx_frame_too_long_errors: 0 rx_in_length_errors: 0 rx_out_length_errors: 0 tx_mac_errors: 0 tx_carrier_sense_errors: 0 tx_errors: 0 rx_errors: 0",
"ifconfig ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.74.249.26 netmask 255.255.248.0 broadcast 10.74.255.255 inet6 fe80::21a:4aff:fe00:672 prefixlen 64 scopeid 0x20<link> inet6 2620:52:0:4af8:21a:4aff:fe00:672 prefixlen 64 scopeid 0x0<global> ether 00:1a:4a:00:06:72 txqueuelen 1000 (Ethernet) RX packets 150549316 bytes 56759897541 (52.8 GiB) RX errors 0 dropped 176924 overruns 0 frame 0 TX packets 55584046 bytes 62111365424 (57.8 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 9373290 bytes 16044697815 (14.9 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9373290 bytes 16044697815 (14.9 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0",
"netstat -ai Kernel Interface table Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg ens3 1500 311847720 0 364903 0 114341918 0 0 0 BMRU lo 65536 19577001 0 0 0 19577001 0 0 0 LRU",
"dnf install iperf3",
"iperf3 -s ----------------------------------------------------------- Server listening on 5201 -----------------------------------------------------------",
"iperf3 -c mon Connecting to host mon, port 5201 [ 4] local xx.x.xxx.xx port 52270 connected to xx.x.xxx.xx port 5201 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 114 MBytes 954 Mbits/sec 0 409 KBytes [ 4] 1.00-2.00 sec 113 MBytes 945 Mbits/sec 0 409 KBytes [ 4] 2.00-3.00 sec 112 MBytes 943 Mbits/sec 0 454 KBytes [ 4] 3.00-4.00 sec 112 MBytes 941 Mbits/sec 0 471 KBytes [ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec 0 471 KBytes [ 4] 5.00-6.00 sec 113 MBytes 945 Mbits/sec 0 471 KBytes [ 4] 6.00-7.00 sec 112 MBytes 937 Mbits/sec 0 488 KBytes [ 4] 7.00-8.00 sec 113 MBytes 947 Mbits/sec 0 520 KBytes [ 4] 8.00-9.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes [ 4] 9.00-10.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 0 sender [ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver iperf Done.",
"ethtool INTERFACE",
"ethtool ens3 Settings for ens3: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Link partner advertised pause frame use: Symmetric Link partner advertised auto-negotiation: Yes Link partner advertised FEC modes: Not reported Speed: 1000Mb/s 1 Duplex: Full 2 Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: off Supports Wake-on: g Wake-on: d Current message level: 0x000000ff (255) drv probe link timer ifdown ifup rx_err tx_err Link detected: yes 3",
"systemctl status chronyd",
"systemctl enable chronyd systemctl start chronyd",
"chronyc sources chronyc sourcestats chronyc tracking",
"HEALTH_WARN 1 mons down, quorum 1,2 mon.b,mon.c mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum)",
"systemctl status ceph- FSID @ DAEMON_NAME systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl status [email protected] systemctl start [email protected]",
"Corruption: error in middle of record Corruption: 1 missing files; example: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb",
"Caught signal (Bus error)",
"ceph daemon ID mon_status",
"ceph daemon mon.host01 mon_status",
"mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum) mon.a addr 127.0.0.1:6789/0 clock skew 0.08235s > max 0.05s (latency 0.0045s)",
"2022-05-04 07:28:32.035795 7f806062e700 0 log [WRN] : mon.a 127.0.0.1:6789/0 clock skew 0.14s > max 0.05s 2022-05-04 04:31:25.773235 7f4997663700 0 log [WRN] : message from mon.1 was stamped 0.186257s in the future, clocks not synchronized",
"mon.ceph1 store is getting too big! 48031 MB >= 15360 MB -- 62% avail",
"du -sch /var/lib/ceph/ CLUSTER_FSID /mon. HOST_NAME /store.db/",
"du -sh /var/lib/ceph/b341e254-b165-11ed-a564-ac1f6bb26e8c/mon.host01/ 109M /var/lib/ceph/b341e254-b165-11ed-a564-ac1f6bb26e8c/mon.host01/ 47G /var/lib/ceph/mon/ceph-ceph1/store.db/ 47G total",
"{ \"name\": \"mon.3\", \"rank\": 2, \"state\": \"peon\", \"election_epoch\": 96, \"quorum\": [ 1, 2 ], \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 1, \"fsid\": \"d5552d32-9d1d-436c-8db1-ab5fc2c63cd0\", \"modified\": \"0.000000\", \"created\": \"0.000000\", \"mons\": [ { \"rank\": 0, \"name\": \"mon.1\", \"addr\": \"172.25.1.10:6789\\/0\" }, { \"rank\": 1, \"name\": \"mon.2\", \"addr\": \"172.25.1.12:6789\\/0\" }, { \"rank\": 2, \"name\": \"mon.3\", \"addr\": \"172.25.1.13:6789\\/0\" } ] } }",
"ceph mon getmap -o /tmp/monmap",
"systemctl stop ceph- FSID @ DAEMON_NAME",
"systemctl stop [email protected]",
"ceph-mon -i ID --extract-monmap /tmp/monmap",
"ceph-mon -i mon.a --extract-monmap /tmp/monmap",
"systemctl stop ceph- FSID @ DAEMON_NAME",
"systemctl stop [email protected]",
"ceph-mon -i ID --inject-monmap /tmp/monmap",
"ceph-mon -i mon.host01 --inject-monmap /tmp/monmap",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"rm -rf /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME",
"rm -rf /var/lib/ceph/mon/remote-host1",
"ceph mon remove SHORT_HOST_NAME --cluster CLUSTER_NAME",
"ceph mon remove host01 --cluster remote",
"ceph tell mon. HOST_NAME compact",
"ceph tell mon.host01 compact",
"[mon] mon_compact_on_start = true",
"systemctl restart ceph- FSID @ DAEMON_NAME",
"systemctl restart [email protected]",
"ceph mon stat",
"systemctl status ceph- FSID @ DAEMON_NAME systemctl stop ceph- FSID @ DAEMON_NAME",
"systemctl status [email protected] systemctl stop [email protected]",
"ceph-monstore-tool /var/lib/ceph/ CLUSTER_FSID /mon. HOST_NAME compact",
"ceph-monstore-tool /var/lib/ceph/b404c440-9e4c-11ec-a28a-001a4a0001df/mon.host01 compact",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"firewall-cmd --add-port 6800-7300/tcp firewall-cmd --add-port 6800-7300/tcp --permanent",
"Corruption: error in middle of record Corruption: 1 missing files; e.g.: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb",
"ceph-volume lvm list",
"mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-USDi",
"for i in { OSD_ID }; do restorecon /var/lib/ceph/osd/ceph-USDi; done",
"for i in { OSD_ID }; do chown -R ceph:ceph /var/lib/ceph/osd/ceph-USDi; done",
"ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev OSD-DATA --path /var/lib/ceph/osd/ceph- OSD-ID",
"ln -snf BLUESTORE DATABASE /var/lib/ceph/osd/ceph- OSD-ID /block.db",
"cd /root/ ms=/tmp/monstore/ db=/root/db/ db_slow=/root/db.slow/ mkdir USDms for host in USDosd_nodes; do echo \"USDhost\" rsync -avz USDms USDhost:USDms rsync -avz USDdb USDhost:USDdb rsync -avz USDdb_slow USDhost:USDdb_slow rm -rf USDms rm -rf USDdb rm -rf USDdb_slow sh -t USDhost <<EOF for osd in /var/lib/ceph/osd/ceph-*; do ceph-objectstore-tool --type bluestore --data-path \\USDosd --op update-mon-db --mon-store-path USDms done EOF rsync -avz USDhost:USDms USDms rsync -avz USDhost:USDdb USDdb rsync -avz USDhost:USDdb_slow USDdb_slow done",
"ceph-authtool /etc/ceph/ceph.client.admin.keyring -n mon. --cap mon 'allow *' --gen-key cat /etc/ceph/ceph.client.admin.keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"",
"mv /root/db/*.sst /root/db.slow/*.sst /tmp/monstore/store.db",
"ceph-monstore-tool /tmp/monstore rebuild -- --keyring /etc/ceph/ceph.client.admin",
"mv /var/lib/ceph/mon/ceph- HOSTNAME /store.db /var/lib/ceph/mon/ceph- HOSTNAME /store.db.corrupted",
"scp -r /tmp/monstore/store.db HOSTNAME :/var/lib/ceph/mon/ceph- HOSTNAME /",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph- HOSTNAME /store.db",
"umount /var/lib/ceph/osd/ceph-*",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"ceph -s",
"ceph auth import -i /etc/ceph/ceph.mgr. HOSTNAME .keyring systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start ceph-b341e254-b165-11ed-a564-ac1f6bb26e8c@mgr.extensa003.exrqql.service",
"systemctl start ceph- FSID @osd. OSD_ID",
"systemctl start [email protected]",
"ceph -s",
"HEALTH_ERR 1 full osds osd.3 is full at 95%",
"ceph df",
"health: HEALTH_WARN 3 backfillfull osd(s) Low space hindering backfill (add storage if this doesn't resolve itself): 32 pgs backfill_toofull",
"ceph df",
"ceph osd set-backfillfull-ratio VALUE",
"ceph osd set-backfillfull-ratio 0.92",
"HEALTH_WARN 1 nearfull osds osd.2 is near full at 85%",
"ceph osd df",
"df",
"HEALTH_WARN 1/3 in osds are down",
"ceph health detail HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080",
"systemctl restart ceph- FSID @osd. OSD_ID",
"systemctl restart [email protected]",
"FAILED assert(0 == \"hit suicide timeout\")",
"dmesg",
"xfs_log_force: error -5 returned",
"Caught signal (Segmentation fault)",
"wrongly marked me down heartbeat_check: no reply from osd.2 since back",
"ceph -w | grep osds 2022-05-05 06:27:20.810535 mon.0 [INF] osdmap e609: 9 osds: 8 up, 9 in 2022-05-05 06:27:24.120611 mon.0 [INF] osdmap e611: 9 osds: 7 up, 9 in 2022-05-05 06:27:25.975622 mon.0 [INF] HEALTH_WARN; 118 pgs stale; 2/9 in osds are down 2022-05-05 06:27:27.489790 mon.0 [INF] osdmap e614: 9 osds: 6 up, 9 in 2022-05-05 06:27:36.540000 mon.0 [INF] osdmap e616: 9 osds: 7 up, 9 in 2022-05-05 06:27:39.681913 mon.0 [INF] osdmap e618: 9 osds: 8 up, 9 in 2022-05-05 06:27:43.269401 mon.0 [INF] osdmap e620: 9 osds: 9 up, 9 in 2022-05-05 06:27:54.884426 mon.0 [INF] osdmap e622: 9 osds: 8 up, 9 in 2022-05-05 06:27:57.398706 mon.0 [INF] osdmap e624: 9 osds: 7 up, 9 in 2022-05-05 06:27:59.669841 mon.0 [INF] osdmap e625: 9 osds: 6 up, 9 in 2022-05-05 06:28:07.043677 mon.0 [INF] osdmap e628: 9 osds: 7 up, 9 in 2022-05-05 06:28:10.512331 mon.0 [INF] osdmap e630: 9 osds: 8 up, 9 in 2022-05-05 06:28:12.670923 mon.0 [INF] osdmap e631: 9 osds: 9 up, 9 in",
"2022-05-25 03:44:06.510583 osd.50 127.0.0.1:6801/149046 18992 : cluster [WRN] map e600547 wrongly marked me down",
"2022-05-25 19:00:08.906864 7fa2a0033700 -1 osd.254 609110 heartbeat_check: no reply from osd.2 since back 2021-07-25 19:00:07.444113 front 2021-07-25 18:59:48.311935 (cutoff 2021-07-25 18:59:48.906862)",
"ceph health detail HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests",
"ceph osd tree | grep down",
"ceph osd set noup ceph osd set nodown",
"HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests",
"2022-05-24 13:18:10.024659 osd.1 127.0.0.1:6812/3032 9 : cluster [WRN] 6 slow requests, 6 included below; oldest blocked for > 61.758455 secs",
"2022-05-25 03:44:06.510583 osd.50 [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]",
"cephadm shell",
"ceph osd set noout",
"ceph osd unset noout",
"HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080",
"cephadm shell",
"ceph osd tree | grep -i down ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF 0 hdd 0.00999 osd.0 down 1.00000 1.00000",
"ceph osd out OSD_ID .",
"ceph osd out osd.0 marked out osd.0.",
"ceph -w | grep backfill 2022-05-02 04:48:03.403872 mon.0 [INF] pgmap v10293282: 431 pgs: 1 active+undersized+degraded+remapped+backfilling, 28 active+undersized+degraded, 49 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 294 active+clean; 72347 MB data, 101302 MB used, 1624 GB / 1722 GB avail; 227 kB/s rd, 1358 B/s wr, 12 op/s; 10626/35917 objects degraded (29.585%); 6757/35917 objects misplaced (18.813%); 63500 kB/s, 15 objects/s recovering 2022-05-02 04:48:04.414397 mon.0 [INF] pgmap v10293283: 431 pgs: 2 active+undersized+degraded+remapped+backfilling, 75 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 295 active+clean; 72347 MB data, 101398 MB used, 1623 GB / 1722 GB avail; 969 kB/s rd, 6778 B/s wr, 32 op/s; 10626/35917 objects degraded (29.585%); 10580/35917 objects misplaced (29.457%); 125 MB/s, 31 objects/s recovering 2022-05-02 04:48:00.380063 osd.1 [INF] 0.6f starting backfill to osd.0 from (0'0,0'0] MAX to 2521'166639 2022-05-02 04:48:00.380139 osd.1 [INF] 0.48 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'43079 2022-05-02 04:48:00.380260 osd.1 [INF] 0.d starting backfill to osd.0 from (0'0,0'0] MAX to 2513'136847 2022-05-02 04:48:00.380849 osd.1 [INF] 0.71 starting backfill to osd.0 from (0'0,0'0] MAX to 2331'28496 2022-05-02 04:48:00.381027 osd.1 [INF] 0.51 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'87544",
"ceph orch daemon stop OSD_ID",
"ceph orch daemon stop osd.0",
"ceph orch osd rm OSD_ID --replace",
"ceph orch osd rm 0 --replace",
"ceph orch apply osd --all-available-devices",
"ceph orch apply osd --all-available-devices --unmanaged=true",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph osd tree",
"sysctl -w kernel.pid.max=4194303",
"kernel.pid.max = 4194303",
"cephadm shell",
"ceph osd dump | grep -i full full_ratio 0.95",
"ceph osd set-full-ratio 0.97",
"ceph osd dump | grep -i full full_ratio 0.97",
"ceph -w",
"ceph osd set-full-ratio 0.95",
"ceph osd dump | grep -i full full_ratio 0.95",
"radosgw-admin user info --uid SYNCHRONIZATION_USER, and radosgw-admin zone get",
"radosgw-admin sync status",
"radosgw-admin data sync status --shard-id= X --source-zone= ZONE_NAME",
"radosgw-admin data sync status --shard-id=27 --source-zone=us-east { \"shard_id\": 27, \"marker\": { \"status\": \"incremental-sync\", \"marker\": \"1_1534494893.816775_131867195.1\", \"next_step_marker\": \"\", \"total_entries\": 1, \"pos\": 0, \"timestamp\": \"0.000000\" }, \"pending_buckets\": [], \"recovering_buckets\": [ \"pro-registry:4ed07bb2-a80b-4c69-aa15-fdc17ae6f5f2.314303.1:26\" ] }",
"radosgw-admin bucket sync status --bucket= X .",
"radosgw-admin sync error list",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw. RGW_ID .asok perf dump data-sync-from- ZONE_NAME",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw.host02-rgw0.103.94309060818504.asok perf dump data-sync-from-us-west { \"data-sync-from-us-west\": { \"fetch bytes\": { \"avgcount\": 54, \"sum\": 54526039885 }, \"fetch not modified\": 7, \"fetch errors\": 0, \"poll latency\": { \"avgcount\": 41, \"sum\": 2.533653367, \"avgtime\": 0.061796423 }, \"poll errors\": 0 } }",
"radosgw-admin sync status realm d713eec8-6ec4-4f71-9eaf-379be18e551b (india) zonegroup ccf9e0b2-df95-4e0a-8933-3b17b64c52b7 (shared) zone 04daab24-5bbd-4c17-9cf5-b1981fd7ff79 (primary) current time 2022-09-15T06:53:52Z zonegroup features enabled: resharding metadata sync no sync (zone is master) data sync source: 596319d2-4ffe-4977-ace1-8dd1790db9fb (secondary) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source",
"radosgw-admin data sync init --source-zone primary",
"ceph orch restart rgw.myrgw",
"2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to decode obj from .rgw.root:periods.91d2a42c-735b-492a-bcf3-05235ce888aa.3 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 failed reading current period info: (5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to start notify service ((5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to init services (ret=(5) Input/output error) couldn't init storage provider",
"date;radosgw-admin bucket list Mon May 13 09:05:30 UTC 2024 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to decode obj from .rgw.root:periods.91d2a42c-735b-492a-bcf3-05235ce888aa.3 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 failed reading current period info: (5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to start notify service ((5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to init services (ret=(5) Input/output error) couldn't init storage provider",
"cephadm shell --radosgw-admin COMMAND",
"cephadm shell -- radosgw-admin bucket list",
"HEALTH_WARN 24 pgs stale; 3/300 in osds are down",
"ceph health detail HEALTH_WARN 24 pgs stale; 3/300 in osds are down pg 2.5 is stuck stale+active+remapped, last acting [2,0] osd.10 is down since epoch 23, last address 192.168.106.220:6800/11080 osd.11 is down since epoch 13, last address 192.168.106.220:6803/11539 osd.12 is down since epoch 24, last address 192.168.106.220:6806/11861",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"cephadm shell",
"ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"ceph pg deep-scrub ID",
"ceph pg deep-scrub 0.6 instructing pg 0.6 on osd.0 to deep-scrub",
"ceph -w | grep ID",
"ceph -w | grep 0.6 2022-05-26 01:35:36.778215 osd.106 [ERR] 0.6 deep-scrub stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes. 2022-05-26 01:35:36.788334 osd.106 [ERR] 0.6 deep-scrub 1 errors",
"PG . ID shard OSD : soid OBJECT missing attr , missing attr _ATTRIBUTE_TYPE PG . ID shard OSD : soid OBJECT digest 0 != known digest DIGEST , size 0 != known size SIZE PG . ID shard OSD : soid OBJECT size 0 != known size SIZE PG . ID deep-scrub stat mismatch, got MISMATCH PG . ID shard OSD : soid OBJECT candidate had a read error, digest 0 != known digest DIGEST",
"PG . ID shard OSD : soid OBJECT digest DIGEST != known digest DIGEST PG . ID shard OSD : soid OBJECT omap_digest DIGEST != known omap_digest DIGEST",
"HEALTH_WARN 197 pgs stuck unclean",
"ceph osd tree",
"HEALTH_WARN 197 pgs stuck inactive",
"ceph osd tree",
"HEALTH_ERR 7 pgs degraded; 12 pgs down; 12 pgs peering; 1 pgs recovering; 6 pgs stuck unclean; 114/3300 degraded (3.455%); 1/3 in osds are down pg 0.5 is down+peering pg 1.4 is down+peering osd.1 is down since epoch 69, last address 192.168.106.220:6801/8651",
"ceph pg ID query",
"ceph pg 0.5 query { \"state\": \"down+peering\", \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Peering\\/GetInfo\", \"enter_time\": \"2021-08-06 14:40:16.169679\", \"requested_info_from\": []}, { \"name\": \"Started\\/Primary\\/Peering\", \"enter_time\": \"2021-08-06 14:40:16.169659\", \"probing_osds\": [ 0, 1], \"blocked\": \"peering is blocked due to down osds\", \"down_osds_we_would_probe\": [ 1], \"peering_blocked_by\": [ { \"osd\": 1, \"current_lost_at\": 0, \"comment\": \"starting or marking this osd lost may let us proceed\"}]}, { \"name\": \"Started\", \"enter_time\": \"2021-08-06 14:40:16.169513\"} ] }",
"HEALTH_WARN 1 pgs degraded; 78/3778 unfound (2.065%)",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 pgs recovering; 1 pgs stuck unclean; recovery 5/937611 objects degraded (0.001%); 1/312537 unfound (0.000%) pg 3.8a5 is stuck unclean for 803946.712780, current state active+recovering, last acting [320,248,0] pg 3.8a5 is active+recovering, acting [320,248,0], 1 unfound recovery 5/937611 objects degraded (0.001%); **1/312537 unfound (0.000%)**",
"ceph pg ID query",
"ceph pg 3.8a5 query { \"state\": \"active+recovering\", \"epoch\": 10741, \"up\": [ 320, 248, 0], \"acting\": [ 320, 248, 0], <snip> \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Active\", \"enter_time\": \"2021-08-28 19:30:12.058136\", \"might_have_unfound\": [ { \"osd\": \"0\", \"status\": \"already probed\"}, { \"osd\": \"248\", \"status\": \"already probed\"}, { \"osd\": \"301\", \"status\": \"already probed\"}, { \"osd\": \"362\", \"status\": \"already probed\"}, { \"osd\": \"395\", \"status\": \"already probed\"}, { \"osd\": \"429\", \"status\": \"osd is down\"}], \"recovery_progress\": { \"backfill_targets\": [], \"waiting_on_backfill\": [], \"last_backfill_started\": \"0\\/\\/0\\/\\/-1\", \"backfill_info\": { \"begin\": \"0\\/\\/0\\/\\/-1\", \"end\": \"0\\/\\/0\\/\\/-1\", \"objects\": []}, \"peer_backfill_info\": [], \"backfills_in_flight\": [], \"recovering\": [], \"pg_backend\": { \"pull_from_peer\": [], \"pushing\": []}}, \"scrub\": { \"scrubber.epoch_start\": \"0\", \"scrubber.active\": 0, \"scrubber.block_writes\": 0, \"scrubber.finalizing\": 0, \"scrubber.waiting_on\": 0, \"scrubber.waiting_on_whom\": []}}, { \"name\": \"Started\", \"enter_time\": \"2021-08-28 19:30:11.044020\"}],",
"cephadm shell",
"ceph pg dump_stuck inactive ceph pg dump_stuck unclean ceph pg dump_stuck stale",
"rados list-inconsistent-pg POOL --format=json-pretty",
"rados list-inconsistent-pg data --format=json-pretty [0.6]",
"rados list-inconsistent-obj PLACEMENT_GROUP_ID",
"rados list-inconsistent-obj 0.6 { \"epoch\": 14, \"inconsistents\": [ { \"object\": { \"name\": \"image1\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"version\": 1 }, \"errors\": [ \"data_digest_mismatch\", \"size_mismatch\" ], \"union_shard_errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"selected_object_info\": \"0:602f83fe:::foo:head(16'1 client.4110.0:1 dirty|data_digest|omap_digest s 968 uv 1 dd e978e67f od ffffffff alloc_hint [0 0 0])\", \"shards\": [ { \"osd\": 0, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 1, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 2, \"errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"size\": 0, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xffffffff\" } ] } ] }",
"rados list-inconsistent-snapset PLACEMENT_GROUP_ID",
"rados list-inconsistent-snapset 0.23 --format=json-pretty { \"epoch\": 64, \"inconsistents\": [ { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000001\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000002\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"ss_attr_missing\": true, \"extra_clones\": true, \"extra clones\": [ 2, 1 ] } ]",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"_PG_._ID_ shard _OSD_: soid _OBJECT_ digest _DIGEST_ != known digest _DIGEST_ _PG_._ID_ shard _OSD_: soid _OBJECT_ omap_digest _DIGEST_ != known omap_digest _DIGEST_",
"ceph pg repair ID",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd pool set POOL pg_num VALUE",
"ceph osd pool set data pg_num 4",
"ceph -s",
"ceph osd pool set POOL pgp_num VALUE",
"ceph osd pool set data pgp_num 4",
"ceph -s",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 3 --osd_recovery_op_priority 3'",
"ceph osd unset noscrub ceph osd unset nodeep-scrub",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list default.region",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op fix-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op fix-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost default.region",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-bytes > OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.backup ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.working-copy",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-bytes < OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-bytes < zone_info.default.working-copy",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT remove",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' remove",
"systemctl status ceph-osd@ OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-omap",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-omap",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omaphdr > zone_info.default.omaphdr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omaphdr < zone_info.default.omaphdr.txt",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omap KEY > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omap \"\" > zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-omap KEY < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omap \"\" < zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-omap KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-omap \"\"",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-attrs",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-attrs",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-attr KEY > OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-attr \"oid\" > zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-attr KEY < OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-attr \"oid\"<zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-attr KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-attr \"oid\"",
"ceph orch apply mon --unmanaged Scheduled mon update...",
"ceph -s mon: 5 daemons, quorum host01, host02, host04, host05 (age 30s), out of quorum: host07",
"ceph mon set_new_tiebreaker NEW_HOST",
"ceph mon set_new_tiebreaker host02",
"ceph mon set_new_tiebreaker host02 Error EINVAL: mon.host02 has location DC1, which matches mons host02 on the datacenter dividing bucket for stretch mode.",
"ceph mon set_location HOST datacenter= DATACENTER",
"ceph mon set_location host02 datacenter=DC3",
"ceph orch daemon rm FAILED_TIEBREAKER_MONITOR --force",
"ceph orch daemon rm mon.host07 --force Removed mon.host07 from host 'host07'",
"ceph mon add HOST IP_ADDRESS datacenter= DATACENTER ceph orch daemon add mon HOST",
"ceph mon add host07 213.222.226.50 datacenter=DC1 ceph orch daemon add mon host07",
"ceph -s mon: 5 daemons, quorum host01, host02, host04, host05, host07 (age 15s)",
"ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host02 disallowed_leaders host02 0: [v2:132.224.169.63:3300/0,v1:132.224.169.63:6789/0] mon.host02; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host07; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host03; crush_location {datacenter=DC2} dumped monmap epoch 19",
"ceph orch apply mon --placement=\" HOST_1 , HOST_2 , HOST_3 , HOST_4 , HOST_5 \"",
"ceph orch apply mon --placement=\"host01, host02, host04, host05, host07\" Scheduled mon update",
"ceph mon add NEW_HOST IP_ADDRESS datacenter= DATACENTER",
"ceph mon add host06 213.222.226.50 datacenter=DC3 adding mon.host06 at [v2:213.222.226.50:3300/0,v1:213.222.226.50:6789/0]",
"ceph orch apply mon --unmanaged Scheduled mon update...",
"ceph orch daemon add mon NEW_HOST",
"ceph orch daemon add mon host06",
"ceph -s mon: 6 daemons, quorum host01, host02, host04, host05, host06 (age 30s), out of quorum: host07",
"ceph mon set_new_tiebreaker NEW_HOST",
"ceph mon set_new_tiebreaker host06",
"ceph orch daemon rm FAILED_TIEBREAKER_MONITOR --force",
"ceph orch daemon rm mon.host07 --force Removed mon.host07 from host 'host07'",
"ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host06 disallowed_leaders host06 0: [v2:213.222.226.50:3300/0,v1:213.222.226.50:6789/0] mon.host06; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host02; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host05; crush_location {datacenter=DC2} dumped monmap epoch 19",
"ceph orch apply mon --placement=\" HOST_1 , HOST_2 , HOST_3 , HOST_4 , HOST_5 \"",
"ceph orch apply mon --placement=\"host01, host02, host04, host05, host06\" Scheduled mon update...",
"ceph osd force_recovery_stretch_mode --yes-i-really-mean-it",
"ceph osd force_healthy_stretch_mode --yes-i-really-mean-it",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config set osd.X osd_mclock_force_run_benchmark_on_init true",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config set osd.X osd_mclock_force_run_benchmark_on_init true",
"subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms yum --enable=rhceph-6-tools-for-rhel-9-x86_64-debug-rpms",
"ceph-base-debuginfo ceph-common-debuginfo ceph-debugsource ceph-fuse-debuginfo ceph-immutable-object-cache-debuginfo ceph-mds-debuginfo ceph-mgr-debuginfo ceph-mon-debuginfo ceph-osd-debuginfo ceph-radosgw-debuginfo cephfs-mirror-debuginfo",
"dnf install gdb",
"echo \"| /usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e\" > /proc/sys/kernel/core_pattern",
"ls -ltr /var/lib/systemd/coredump total 8232 -rw-r-----. 1 root root 8427548 Jan 22 19:24 core.ceph-osd.167.5ede29340b6c4fe4845147f847514c12.15622.1584573794000000.xz",
"ps exec -it MONITOR_ID_OR_OSD_ID bash",
"podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-osd-2 bash",
"dnf install procps-ng gdb",
"ps -aef | grep PROCESS | grep -v run",
"ps -aef | grep ceph-mon | grep -v run ceph 15390 15266 0 18:54 ? 00:00:29 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 5 ceph 18110 17985 1 19:40 ? 00:00:08 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 2",
"gcore ID",
"gcore 18110 warning: target file /proc/18110/cmdline contained unexpected null characters Saved corefile core.18110",
"ls -ltr total 709772 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.18110",
"cp ceph-mon- MONITOR_ID :/tmp/mon.core. MONITOR_PID /tmp",
"cephadm shell",
"ceph config set mgr mgr/cephadm/allow_ptrace true",
"ceph orch redeploy SERVICE_ID",
"ceph orch redeploy mgr ceph orch redeploy rgw.rgw.1",
"exit ssh [email protected]",
"podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-rgw-rgw-1-host04 bash",
"dnf install procps-ng gdb",
"ps aux | grep rados ceph 6 0.3 2.8 5334140 109052 ? Sl May10 5:25 /usr/bin/radosgw -n client.rgw.rgw.1.host04 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug",
"gcore PID",
"gcore 6",
"ls -ltr total 108798 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.6",
"cp ceph-mon- DAEMON_ID :/tmp/mon.core. PID /tmp"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html-single/troubleshooting_guide/index |
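A typical first-response sequence for a degraded cluster, combining only commands that already appear in the troubleshooting command list above; the OSD_DOWN health message and the 10-minute mute duration are illustrative assumptions:

# Open a shell inside the cephadm container on an admin node
cephadm shell
# Check overall cluster state and the detailed health breakdown
ceph -s
ceph health detail
# Identify any OSDs reported down
ceph osd tree | grep -i down
# Optionally silence a known, in-progress warning for a limited time
ceph health mute OSD_DOWN 10m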
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/release_notes_for_red_hat_build_of_quarkus_3.15/proc_providing-feedback-on-red-hat-documentation_quarkus-release-notes |
Config APIs | Config APIs OpenShift Container Platform 4.15 Reference guide for config APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/config_apis/index |
Chapter 1. APIs | Chapter 1. APIs You can access APIs to create and manage application resources, channels, subscriptions, and to query information. User required access: You can only perform actions that your role is assigned. Learn about access requirements from the Role-based access control documentation. You can also access all APIs from the integrated console. From the local-cluster view, navigate to Home > API Explorer to explore API groups. For more information, review the API documentation for each of the following resources: Clusters API ClusterSets API (v1beta2) ClusterSetBindings API (v1beta2) Channels API Subscriptions API PlacementRules API (deprecated) Applications API Helm API Policy API Observability API Search query API MultiClusterHub API Placements API (v1beta1) PlacementDecisions API (v1beta1) DiscoveryConfig API DiscoveredCluster API AddOnDeploymentConfig API (v1alpha1) ClusterManagementAddOn API (v1alpha1) ManagedClusterAddOn API (v1alpha1) ManagedClusterSet API KlusterletConfig API (v1alpha1) Policy compliance API (Technology Preview) 1.1. Clusters API 1.1.1. Overview This documentation is for the cluster resource for Red Hat Advanced Cluster Management for Kubernetes. Cluster resource has four possible requests: create, query, delete and update. ManagedCluster represents the desired state and current status of a managed cluster. ManagedCluster is a cluster-scoped resource. 1.1.1.1. Version information Version : 2.11.0 1.1.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.1.1.3. Tags cluster.open-cluster-management.io : Create and manage clusters 1.1.2. Paths 1.1.2.1. Query all clusters 1.1.2.1.1. Description Query your clusters for more details. 1.1.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.1.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.1.4. Consumes cluster/yaml 1.1.2.1.5. Tags cluster.open-cluster-management.io 1.1.2.2. Create a cluster 1.1.2.2.1. Description Create a cluster 1.1.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the cluster to be created. Cluster 1.1.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.2.4. Consumes cluster/yaml 1.1.2.2.5. Tags cluster.open-cluster-management.io 1.1.2.2.6. Example HTTP request 1.1.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1", "kind" : "ManagedCluster", "metadata" : { "labels" : { "vendor" : "OpenShift" }, "name" : "cluster1" }, "spec": { "hubAcceptsClient": true, "managedClusterClientConfigs": [ { "caBundle": "test", "url": "https://test.com" } ] }, "status" : { } } 1.1.2.3. Query a single cluster 1.1.2.3.1. Description Query a single cluster for more details. 1.1.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path cluster_name required Name of the cluster that you want to query. string 1.1.2.3.3. 
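The create and query operations above are served as ordinary Kubernetes REST endpoints under the documented BasePath /kubernetes/apis. The following curl sketch assumes a hub endpoint URL and bearer token of your own (both placeholders), and assumes the ManagedCluster resource is exposed at cluster.open-cluster-management.io/v1/managedclusters; the create call reuses the example request body shown above, saved locally as cluster1.json.

# Placeholders: set these for your own hub.
TOKEN=<access_token>
HUB=https://<hub-endpoint>

# Query all managed clusters.
curl -k -H "Authorization: Bearer $TOKEN" \
  "$HUB/kubernetes/apis/cluster.open-cluster-management.io/v1/managedclusters"

# Create a managed cluster from the example body above (saved as cluster1.json).
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  --data @cluster1.json \
  "$HUB/kubernetes/apis/cluster.open-cluster-management.io/v1/managedclusters"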
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.3.4. Tags cluster.open-cluster-management.io 1.1.2.4. Delete a cluster 1.1.2.4.1. Description Delete a single cluster 1.1.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path cluster_name required Name of the cluster that you want to delete. string 1.1.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.4.4. Tags cluster.open-cluster-management.io 1.1.3. Definitions 1.1.3.1. Cluster Name Description Schema apiVersion required The versioned schema of the ManagedCluster . string kind required String value that represents the REST resource. string metadata required The metadata of the ManagedCluster . object spec required The specification of the ManagedCluster . spec spec Name Description Schema hubAcceptsClient required Specifies whether the hub can establish a connection with the klusterlet agent on the managed cluster. The default value is false , and can only be changed to true when you have an RBAC rule configured on the hub cluster that allows you to make updates to the virtual subresource of managedclusters/accept . bool managedClusterClientConfigs optional Lists the apiserver addresses of the managed cluster. managedClusterClientConfigs array leaseDurationSeconds optional Specifies the lease update time interval of the klusterlet agents on the managed cluster. By default, the klusterlet agent updates its lease every 60 seconds. integer (int32) taints optional Prevents a managed cluster from being assigned to one or more managed cluster sets during scheduling. taint array managedClusterClientConfigs Name Description Schema URL required string CABundle optional Pattern : "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD" string (byte) taint Name Description Schema key required The taint key that is applied to a cluster. string value optional The taint value that corresponds to the taint key. string effect optional Effect of the taint on placements that do not tolerate the taint. Valid values are NoSelect , PreferNoSelect , and NoSelectIfNew . string 1.2. Clustersets API (v1beta2) 1.2.1. Overview This documentation is for the ClusterSet resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterSet resource has four possible requests: create, query, delete, and update. The ManagedClusterSet defines a group of ManagedClusters. You can assign a ManagedCluster to a specific ManagedClusterSet by adding a label with the name cluster.open-cluster-management.io/clusterset on the ManagedCluster that refers to the ManagedClusterSet. You can only add or remove this label on a ManagedCluster when you have an RBAC rule that allows the create permissions on a virtual subresource of managedclustersets/join . You must have this permission on both the source and the target ManagedClusterSets to update this label. 1.2.1.1. Version information Version : 2.11.0 1.2.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.2.1.3. Tags cluster.open-cluster-management.io : Create and manage Clustersets 1.2.2. Paths 1.2.2.1. Query all clustersets 1.2.2.1.1. 
Description Query your Clustersets for more details. 1.2.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.2.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.1.4. Consumes clusterset/yaml 1.2.2.1.5. Tags cluster.open-cluster-management.io 1.2.2.2. Create a clusterset 1.2.2.2.1. Description Create a Clusterset. 1.2.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the clusterset to be created. Clusterset 1.2.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.2.4. Consumes clusterset/yaml 1.2.2.2.5. Tags cluster.open-cluster-management.io 1.2.2.2.6. Example HTTP request 1.2.2.2.6.1. Request body { "apiVersion": "cluster.open-cluster-management.io/v1beta2", "kind": "ManagedClusterSet", "metadata": { "name": "clusterset1" }, "spec": { "clusterSelector": { "selectorType": "ExclusiveClusterSetLabel" } }, "status": {} } 1.2.2.3. Query a single clusterset 1.2.2.3.1. Description Query a single clusterset for more details. 1.2.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterset_name required Name of the clusterset that you want to query. string 1.2.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.3.4. Tags cluster.open-cluster-management.io 1.2.2.4. Delete a clusterset 1.2.2.4.1. Description Delete a single clusterset. 1.2.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterset_name required Name of the clusterset that you want to delete. string 1.2.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.4.4. Tags cluster.open-cluster-management.io 1.2.3. Definitions 1.2.3.1. Clusterset Name Schema apiVersion required string kind required string metadata required object 1.3. Clustersetbindings API (v1beta2) 1.3.1. Overview This documentation is for the ClusterSetBinding resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterSetBinding resource has four possible requests: create, query, delete, and update. ManagedClusterSetBinding projects a ManagedClusterSet into a certain namespace. You can create a ManagedClusterSetBinding in a namespace and bind it to a ManagedClusterSet if you have an RBAC rule that allows you to create on the virtual subresource of managedclustersets/bind . 1.3.1.1. Version information Version : 2.11.0 1.3.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.3.1.3. Tags cluster.open-cluster-management.io : Create and manage clustersetbindings 1.3.2. Paths 1.3.2.1. Query all clustersetbindings 1.3.2.1.1. 
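As the overview above notes, a ManagedCluster joins a ManagedClusterSet through the cluster.open-cluster-management.io/clusterset label, guarded by the managedclustersets/join permission. A small CLI sketch of that step, using the clusterset1 example above and a hypothetical cluster named cluster1:

# Requires create permission on managedclustersets/join for the source and target sets.
oc label managedcluster cluster1 cluster.open-cluster-management.io/clusterset=clusterset1 --overwrite

# Confirm the membership label.
oc get managedcluster cluster1 --show-labels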
Description Query your clustersetbindings for more details. 1.3.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.3.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.1.4. Consumes clustersetbinding/yaml 1.3.2.1.5. Tags cluster.open-cluster-management.io 1.3.2.2. Create a clustersetbinding 1.3.2.2.1. Description Create a clustersetbinding. 1.3.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Body body required Parameters describing the clustersetbinding to be created. Clustersetbinding 1.3.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.2.4. Consumes clustersetbinding/yaml 1.3.2.2.5. Tags cluster.open-cluster-management.io 1.3.2.2.6. Example HTTP request 1.3.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta2", "kind" : "ManagedClusterSetBinding", "metadata" : { "name" : "clusterset1", "namespace" : "ns1" }, "spec": { "clusterSet": "clusterset1" }, "status" : { } } 1.3.2.3. Query a single clustersetbinding 1.3.2.3.1. Description Query a single clustersetbinding for more details. 1.3.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path clustersetbinding_name required Name of the clustersetbinding that you want to query. string 1.3.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.3.4. Tags cluster.open-cluster-management.io 1.3.2.4. Delete a clustersetbinding 1.3.2.4.1. Description Delete a single clustersetbinding. 1.3.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path clustersetbinding_name required Name of the clustersetbinding that you want to delete. string 1.3.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.4.4. Tags cluster.open-cluster-management.io 1.3.3. Definitions 1.3.3.1. Clustersetbinding Name Description Schema apiVersion required Versioned schema of the ManagedClusterSetBinding . string kind required String value that represents the REST resource. string metadata required Metadata of the ManagedClusterSetBinding . object spec required Specification of the ManagedClusterSetBinding . spec spec Name Description Schema clusterSet required Name of the ManagedClusterSet to bind. 
It must match the instance name of the ManagedClusterSetBinding and cannot change after it is created. string 1.4. Clusterview API (v1alpha1) 1.4.1. Overview This documentation is for the clusterview resource for Red Hat Advanced Cluster Management for Kubernetes. The clusterview resource provides a CLI command that enables you to view a list of the managed clusters and managed cluster sets that that you can access. The three possible requests are: list, get, and watch. 1.4.1.1. Version information Version : 2.11.0 1.4.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.4.1.3. Tags clusterview.open-cluster-management.io : View a list of managed clusters that your ID can access. 1.4.2. Paths 1.4.2.1. Get managed clusters 1.4.2.1.1. Description View a list of the managed clusters that you can access. 1.4.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.4.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.1.4. Consumes managedcluster/yaml 1.4.2.1.5. Tags clusterview.open-cluster-management.io 1.4.2.2. List managed clusters 1.4.2.2.1. Description View a list of the managed clusters that you can access. 1.4.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body optional Name of the user ID for which you want to list the managed clusters. string 1.4.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.2.4. Consumes managedcluster/yaml 1.4.2.2.5. Tags clusterview.open-cluster-management.io 1.4.2.2.6. Example HTTP request 1.4.2.2.6.1. Request body { "apiVersion" : "clusterview.open-cluster-management.io/v1alpha1", "kind" : "ClusterView", "metadata" : { "name" : "<user_ID>" }, "spec": { }, "status" : { } } 1.4.2.3. Watch the managed cluster sets 1.4.2.3.1. Description Watch the managed clusters that you can access. 1.4.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.4.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.4. List the managed cluster sets. 1.4.2.4.1. Description List the managed clusters that you can access. 1.4.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.4.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.5. List the managed cluster sets. 1.4.2.5.1. Description List the managed clusters that you can access. 1.4.2.5.2. 
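Because clusterview is an aggregated API, the list operation described above can also be driven from the CLI or over REST. The following is a sketch only; the resource plural and path are inferred from the group and version named in this section and may need adjusting for your hub, and TOKEN and HUB are the same placeholders used in the cluster example.

# List the managed clusters that your current identity is allowed to see.
oc get managedclusters.clusterview.open-cluster-management.io

# Equivalent REST call.
curl -k -H "Authorization: Bearer $TOKEN" \
  "$HUB/kubernetes/apis/clusterview.open-cluster-management.io/v1alpha1/managedclusters"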
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.4.2.5.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.6. Watch the managed cluster sets. 1.4.2.6.1. Description Watch the managed clusters that you can access. 1.4.2.6.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.4.2.6.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5. Channels API 1.5.1. Overview This documentation is for the Channel resource for Red Hat Advanced Cluster Management for Kubernetes. The Channel resource has four possible requests: create, query, delete and update. 1.5.1.1. Version information Version : 2.11.0 1.5.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.5.1.3. Tags channels.apps.open-cluster-management.io : Create and manage deployables 1.5.2. Paths 1.5.2.1. Create a channel 1.5.2.1.1. Description Create a channel. 1.5.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the deployable to be created. Channel 1.5.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.1.4. Consumes application/yaml 1.5.2.1.5. Tags channels.apps.open-cluster-management.io 1.5.2.1.6. Example HTTP request 1.5.2.1.6.1. Request body { "apiVersion": "apps.open-cluster-management.io/v1", "kind": "Channel", "metadata": { "name": "sample-channel", "namespace": "default" }, "spec": { "configMapRef": { "kind": "configmap", "name": "bookinfo-resource-filter-configmap" }, "pathname": "https://charts.helm.sh/stable", "type": "HelmRepo" } } 1.5.2.2. Query all channels for the target namespace 1.5.2.2.1. Description Query your channels for more details. 1.5.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.5.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.2.4. Consumes application/yaml 1.5.2.2.5. Tags channels.apps.open-cluster-management.io 1.5.2.3. Query a single channels of a namespace 1.5.2.3.1. Description Query a single channels for more details. 1.5.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path channel_name required Name of the deployable that you wan to query. 
string Path namespace required Namespace that you want to use, for example, default. string 1.5.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.3.4. Tags channels.apps.open-cluster-management.io 1.5.2.4. Delete a Channel 1.5.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path channel_name required Name of the Channel that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.5.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.4.3. Tags channels.apps.open-cluster-management.io 1.5.3. Definitions 1.5.3.1. Channel Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Description Schema configMapRef optional ObjectReference contains enough information to let you inspect or modify the referred object. configMapRef gates optional ChannelGate defines criteria for promote to channel gates pathname required string secretRef optional ObjectReference contains enough information to let you inspect or modify the referred object. secretRef sourceNamespaces optional enum (Namespace, HelmRepo, ObjectBucket, Git, namespace, helmrepo, objectbucket, github) array configMapRef Name Description Schema apiVersion optional API version of the referent. string fieldPath optional If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. string kind optional Kind of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/ name optional Name of the referent. More info: Names string namespace optional Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ string resourceVersion optional Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency string uid optional gates Name Description Schema annotations optional typical annotations of k8s annotations labelSelector optional A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. labelSelector name optional string annotations Name Schema key optional string value optional string labelSelector Name Description Schema matchExpressions optional matchExpressions is a list of label selector requirements. The requirements are ANDed. 
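Channel objects are namespaced, so create and query requests are issued against a specific namespace. A hedged curl sketch that reuses the sample-channel request body shown earlier (saved locally as channel.json) and the TOKEN and HUB placeholders from the cluster example:

# Create the channel in the default namespace.
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  --data @channel.json \
  "$HUB/kubernetes/apis/apps.open-cluster-management.io/v1/namespaces/default/channels"

# Query it back by name.
curl -k -H "Authorization: Bearer $TOKEN" \
  "$HUB/kubernetes/apis/apps.open-cluster-management.io/v1/namespaces/default/channels/sample-channel"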
matchExpressions array matchLabels optional matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. string, string map matchExpressions Name Description Schema key required key is the label key that the selector applies to. string operator required operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. string values optional values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. string array secretRef Name Description Schema apiVersion optional API version of the referent. string fieldPath optional If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. string kind optional Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds string name optional Name of the referent. More info: Names string namespace optional Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ string resourceVersion optional Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency string uid optional UID of the referent. More info: UIIDs string 1.6. Subscriptions API 1.6.1. Overview This documentation is for the Subscription resource for Red Hat Advanced Cluster Management for Kubernetes. The Subscription resource has four possible requests: create, query, delete and update. Deprecated: PlacementRule 1.6.1.1. Version information Version : 2.11.0 1.6.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.6.1.3. Tags subscriptions.apps.open-cluster-management.io : Create and manage subscriptions 1.6.2. Paths 1.6.2.1. Create a subscription 1.6.2.1.1. Description Create a subscription. 1.6.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the subscription to be created. Subscription 1.6.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.1.4. Consumes subscription/yaml 1.6.2.1.5. Tags subscriptions.apps.open-cluster-management.io 1.6.2.1.6. Example HTTP request 1.6.2.1.6.1. 
Request body { "apiVersion" : "apps.open-cluster-management.io/v1", "kind" : "Subscription", "metadata" : { "name" : "sample_subscription", "namespace" : "default", "labels" : { "app" : "sample_subscription-app" }, "annotations" : { "apps.open-cluster-management.io/git-path" : "apps/sample/", "apps.open-cluster-management.io/git-branch" : "sample_branch" } }, "spec" : { "channel" : "channel_namespace/sample_channel", "packageOverrides" : [ { "packageName" : "my-sample-application", "packageAlias" : "the-sample-app", "packageOverrides" : [ { "path" : "spec", "value" : { "persistence" : { "enabled" : false, "useDynamicProvisioning" : false }, "license" : "accept", "tls" : { "hostname" : "my-mcm-cluster.icp" }, "sso" : { "registrationImage" : { "pullSecret" : "hub-repo-docker-secret" } } } } ] } ], "placement" : { "placementRef" : { "kind" : "PlacementRule", "name" : "demo-clusters" } } } } 1.6.2.2. Query all subscriptions 1.6.2.2.1. Description Query your subscriptions for more details. 1.6.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.6.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.2.4. Consumes subscription/yaml 1.6.2.2.5. Tags subscriptions.apps.open-cluster-management.io 1.6.2.3. Query a single subscription 1.6.2.3.1. Description Query a single subscription for more details. 1.6.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path subscription_name required Name of the subscription that you wan to query. string 1.6.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.3.4. Tags subscriptions.apps.open-cluster-management.io 1.6.2.4. Delete a subscription 1.6.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path subscription_name required Name of the subscription that you want to delete. string 1.6.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.4.3. Tags subscriptions.apps.open-cluster-management.io 1.6.3. Definitions 1.6.3.1. 
Subscription Name Schema apiVersion required string kind required string metadata required metadata spec required spec status optional status metadata Name Schema annotations optional object labels optional object name optional string namespace optional string spec Name Schema channel required string name optional string overrides optional overrides array packageFilter optional packageFilter packageOverrides optional packageOverrides array placement optional placement timewindow optional timewindow overrides Name Schema clusterName required string clusterOverrides required object array packageFilter Name Description Schema annotations optional string, string map filterRef optional filterRef labelSelector optional labelSelector version optional Pattern : "( )((\\.[0-9] )(\\. )|(\\.[0-9] )?(\\.[xX]))USD" string filterRef Name Schema name optional string labelSelector Name Schema matchExpressions optional matchExpressions array matchLabels optional string, string map matchExpressions Name Schema key required string operator required string values optional string array packageOverrides Name Schema packageAlias optional string packageName required string packageOverrides optional object array placement Name Schema clusterSelector optional clusterSelector clusters optional clusters array local optional boolean placementRef optional placementRef clusterSelector Name Schema matchExpressions optional matchExpressions array matchLabels optional string, string map matchExpressions Name Schema key required string operator required string values optional string array clusters Name Schema name required string placementRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string timewindow Name Schema daysofweek optional string array hours optional hours array location optional string windowtype optional enum (active, blocked, Active, Blocked) hours Name Schema end optional string start optional string status Name Schema lastUpdateTime optional string (date-time) message optional string phase optional string reason optional string statuses optional object 1.7. PlacementRules API (deprecated) 1.7.1. Overview This documentation is for the PlacementRule resource for Red Hat Advanced Cluster Management for Kubernetes. The PlacementRule resource has four possible requests: create, query, delete and update. 1.7.1.1. Version information Version : 2.11.0 1.7.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.7.1.3. Tags placementrules.apps.open-cluster-management.io : Create and manage placement rules 1.7.2. Paths 1.7.2.1. Create a placement rule 1.7.2.1.1. Description Create a placement rule. 1.7.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the placement rule to be created. PlacementRule 1.7.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.1.4. Consumes application/yaml 1.7.2.1.5. Tags placementrules.apps.open-cluster-management.io 1.7.2.1.6. Example HTTP request 1.7.2.1.6.1. 
Request body { "apiVersion" : "apps.open-cluster-management.io/v1", "kind" : "PlacementRule", "metadata" : { "name" : "towhichcluster", "namespace" : "ns-sub-1" }, "spec" : { "clusterConditions" : [ { "type": "ManagedClusterConditionAvailable", "status": "True" } ], "clusterSelector" : { } } } 1.7.2.2. Query all placement rules 1.7.2.2.1. Description Query your placement rules for more details. 1.7.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.7.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.2.4. Consumes application/yaml 1.7.2.2.5. Tags placementrules.apps.open-cluster-management.io 1.7.2.3. Query a single placementrule 1.7.2.3.1. Description Query a single placement rule for more details. 1.7.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path placementrule_name required Name of the placementrule that you want to query. string 1.7.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.3.4. Tags placementrules.apps.open-cluster-management.io 1.7.2.4. Delete a placementrule 1.7.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path placementrule_name required Name of the placementrule that you want to delete. string 1.7.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.4.3. Tags placementrules.apps.open-cluster-management.io 1.7.3. Definitions 1.7.3.1. Placementrule Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Schema clusterConditions optional clusterConditions array clusterReplicas optional integer clusterSelector optional clusterSelector clusters optional clusters array policies optional policies array resourceHint optional resourceHint schedulerName optional string clusterConditions Name Schema status optional string type optional string clusterSelector Name Schema matchExpressions optional matchExpressions array matchLabels optional string, string map matchExpressions Name Schema key optional string operator optional string values optional string array clusters Name Schema name optional string policies Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string resourceHint Name Schema order optional string type optional string 1.8. Applications API 1.8.1. Overview This documentation is for the Application resource for Red Hat Advanced Cluster Management for Kubernetes. 
Application resource has four possible requests: create, query, delete and update. 1.8.1.1. Version information Version : 2.11.0 1.8.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.8.1.3. Tags applications.app.k8s.io : Create and manage applications 1.8.2. Paths 1.8.2.1. Create a application 1.8.2.1.1. Description Create a application. 1.8.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the application to be created. Application 1.8.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.1.4. Consumes application/yaml 1.8.2.1.5. Tags applications.app.k8s.io 1.8.2.1.6. Example HTTP request 1.8.2.1.6.1. Request body { "apiVersion" : "app.k8s.io/v1beta1", "kind" : "Application", "metadata" : { "labels" : { "app" : "nginx-app-details" }, "name" : "nginx-app-3", "namespace" : "ns-sub-1" }, "spec" : { "componentKinds" : [ { "group" : "apps.open-cluster-management.io", "kind" : "Subscription" } ] }, "selector" : { "matchLabels" : { "app" : "nginx-app-details" } }, "status" : { } } 1.8.2.2. Query all applications 1.8.2.2.1. Description Query your applications for more details. 1.8.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.8.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.2.4. Consumes application/yaml 1.8.2.2.5. Tags applications.app.k8s.io 1.8.2.3. Query a single application 1.8.2.3.1. Description Query a single application for more details. 1.8.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the application that you wan to query. string Path namespace required Namespace that you want to use, for example, default. string 1.8.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.3.4. Tags applications.app.k8s.io 1.8.2.4. Delete a application 1.8.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the application that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.8.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.4.3. Tags applications.app.k8s.io 1.8.3. Definitions 1.8.3.1. 
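The same Application body can be applied with the CLI instead of calling the REST endpoint directly. A short sketch, assuming the example request body above is saved locally as application.json:

# Create the application from the saved example body.
oc apply -f application.json

# Query it back, matching the single-application query described above.
oc get applications.app.k8s.io nginx-app-3 -n ns-sub-1 -o yaml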
Application Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Schema assemblyPhase optional string componentKinds optional object array descriptor optional descriptor info optional info array selector optional object descriptor Name Schema description optional string icons optional icons array keywords optional string array links optional links array maintainers optional maintainers array notes optional string owners optional owners array type optional string version optional string icons Name Schema size optional string src required string type optional string links Name Schema description optional string url optional string maintainers Name Schema email optional string name optional string url optional string owners Name Schema email optional string name optional string url optional string info Name Schema name optional string type optional string value optional string valueFrom optional valueFrom valueFrom Name Schema configMapKeyRef optional configMapKeyRef ingressRef optional ingressRef secretKeyRef optional secretKeyRef serviceRef optional serviceRef type optional string configMapKeyRef Name Schema apiVersion optional string fieldPath optional string key optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string ingressRef Name Schema apiVersion optional string fieldPath optional string host optional string kind optional string name optional string namespace optional string path optional string resourceVersion optional string uid optional string secretKeyRef Name Schema apiVersion optional string fieldPath optional string key optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string serviceRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string path optional string port optional integer (int32) resourceVersion optional string uid optional string 1.9. Helm API 1.9.1. Overview This documentation is for the HelmRelease resource for Red Hat Advanced Cluster Management for Kubernetes. The HelmRelease resource has four possible requests: create, query, delete and update. 1.9.1.1. Version information Version : 2.11.0 1.9.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.9.1.3. Tags helmreleases.apps.open-cluster-management.io : Create and manage helmreleases 1.9.2. Paths 1.9.2.1. Create a helmrelease 1.9.2.1.1. Description Create a helmrelease. 1.9.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the helmrelease to be created. HelmRelease 1.9.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.1.4. Consumes application/yaml 1.9.2.1.5. Tags helmreleases.apps.open-cluster-management.io 1.9.2.1.6. Example HTTP request 1.9.2.1.6.1. 
Request body { "apiVersion" : "apps.open-cluster-management.io/v1", "kind" : "HelmRelease", "metadata" : { "name" : "nginx-ingress", "namespace" : "default" }, "repo" : { "chartName" : "nginx-ingress", "source" : { "helmRepo" : { "urls" : [ "https://kubernetes-charts.storage.googleapis.com/nginx-ingress-1.26.0.tgz" ] }, "type" : "helmrepo" }, "version" : "1.26.0" }, "spec" : { "defaultBackend" : { "replicaCount" : 3 } } } 1.9.2.2. Query all helmreleases 1.9.2.2.1. Description Query your helmreleases for more details. 1.9.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.9.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.2.4. Consumes application/yaml 1.9.2.2.5. Tags helmreleases.apps.open-cluster-management.io 1.9.2.3. Query a single helmrelease 1.9.2.3.1. Description Query a single helmrelease for more details. 1.9.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path helmrelease_name required Name of the helmrelease that you wan to query. string Path namespace required Namespace that you want to use, for example, default. string 1.9.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.3.4. Tags helmreleases.apps.open-cluster-management.io 1.9.2.4. Delete a helmrelease 1.9.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path helmrelease_name required Name of the helmrelease that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.9.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.4.3. Tags helmreleases.apps.open-cluster-management.io 1.9.3. Definitions 1.9.3.1. 
HelmRelease Name Schema apiVersion required string kind required string metadata required object repo required repo spec required object status required status repo Name Schema chartName optional string configMapRef optional configMapRef secretRef optional secretRef source optional source version optional string configMapRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string secretRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string source Name Schema github optional github helmRepo optional helmRepo type optional string github Name Schema branch optional string chartPath optional string urls optional string array helmRepo Name Schema urls optional string array status Name Schema conditions required conditions array deployedRelease optional deployedRelease conditions Name Schema lastTransitionTime optional string (date-time) message optional string reason optional string status required string type required string deployedRelease Name Schema manifest optional string name optional string 1.10. Policy API 1.10.1. Overview This documentation is for the Policy resource for Red Hat Advanced Cluster Management for Kubernetes. The Policy resource has four possible requests: create, query, delete and update. 1.10.1.1. Version information Version : 2.11.0 1.10.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.1.3. Tags policy.open-cluster-management.io/v1 : Create and manage policies 1.10.2. Paths 1.10.2.1. Create a policy 1.10.2.1.1. Description Create a policy. 1.10.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the policy to be created. Policy 1.10.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.1.4. Consumes application/json 1.10.2.1.5. Tags policy.open-cluster-management.io 1.10.2.1.6. Example HTTP request 1.10.2.1.6.1. 
Request body { "apiVersion": "policy.open-cluster-management.io/v1", "kind": "Policy", "metadata": { "name": "test-policy-swagger", "description": "Example body for Policy API Swagger docs" }, "spec": { "remediationAction": "enforce", "namespaces": { "include": [ "default" ], "exclude": [ "kube*" ] }, "policy-templates": { "kind": "ConfigurationPolicy", "apiVersion": "policy.open-cluster-management.io/v1", "complianceType": "musthave", "metadataComplianceType": "musthave", "metadata": { "namespace": null, "name": "test-role" }, "selector": { "matchLabels": { "cloud": "IBM" } }, "spec" : { "object-templates": { "complianceType": "musthave", "metadataComplianceType": "musthave", "objectDefinition": { "apiVersion": "rbac.authorization.k8s.io/v1", "kind": "Role", "metadata": { "name": "role-policy", }, "rules": [ { "apiGroups": [ "extensions", "apps" ], "resources": [ "deployments" ], "verbs": [ "get", "list", "watch", "delete" ] }, { "apiGroups": [ "core" ], "resources": [ "pods" ], "verbs": [ "create", "update", "patch" ] }, { "apiGroups": [ "core" ], "resources": [ "secrets" ], "verbs": [ "get", "watch", "list", "create", "delete", "update", "patch" ], }, ], }, }, }, }, 1.10.2.2. Query all policies 1.10.2.2.1. Description Query your policies for more details. 1.10.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to apply the policy to, for example, default. string 1.10.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.2.4. Consumes application/json 1.10.2.2.5. Tags policy.open-cluster-management.io 1.10.2.3. Query a single policy 1.10.2.3.1. Description Query a single policy for more details. 1.10.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path policy_name required Name of the policy that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.10.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.3.4. Tags policy.open-cluster-management.io 1.10.2.4. Delete a policy 1.10.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path policy_name required Name of the policy that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.10.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.4.3. Tags policy.open-cluster-management.io 1.10.3. Definitions 1.10.3.1. Policy Name Description Schema apiVersion required The versioned schema of Policy. string kind required String value that represents the REST resource. string metadata required Describes rules that define the policy. object spec Name Description Schema remediationAction optional Value that represents how violations are handled as defined in the resource. 
string namespaceSelector required Value that represents which namespaces the policy is applied. string policy-templates Name Description Schema apiVersion required The versioned schema of Policy. string kind optional String value that represents the REST resource. string metadata required Describes rules that define the policy. object complianceType Used to list expected behavior for roles and other Kubernetes object that must be evaluated or applied to the managed clusters. string metadataComplianceType optional Provides a way for users to process labels and annotations of an object differently than the other fields. The parameter value defaults to the same value of the ComplianceType parameter. string clusterConditions optional Section to define labels. string rules optional string clusterConditions Name Description Schema matchLabels optional The label that is required for the policy to be applied to a namespace. object cloud optional The label that is required for the policy to be applied to a cloud provider. string rules Name Description Schema apiGroups required List of APIs that the rule applies to. string resources required A list of resource types. object verbs required A list of verbs. string 1.11. Observability API 1.11.1. Overview This documentation is for the MultiClusterObservability resource for Red Hat Advanced Cluster Management for Kubernetes. The MultiClusterObservability resource has four possible requests: create, query, delete and update. 1.11.1.1. Version information Version : 2.11.0 1.11.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.11.1.3. Tags observability.open-cluster-management.io : Create and manage multiclusterobservabilities 1.11.2. Paths 1.11.2.1. Create a multiclusterobservability resource 1.11.2.1.1. Description Create a MultiClusterObservability resource. 1.11.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the MultiClusterObservability resource to be created. MultiClusterObservability 1.11.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.1.4. Consumes application/yaml 1.11.2.1.5. Tags observability.apps.open-cluster-management.io 1.11.2.1.6. Example HTTP request 1.11.2.1.6.1. Request body { "apiVersion": "observability.open-cluster-management.io/v1beta2", "kind": "MultiClusterObservability", "metadata": { "name": "example" }, "spec": { "observabilityAddonSpec": {} "storageConfig": { "metricObjectStorage": { "name": "thanos-object-storage", "key": "thanos.yaml" "writeStorage": { - "key": " ", "name" : " " - "key": " ", "name" : " " } } } } 1.11.2.2. Query all multiclusterobservabilities 1.11.2.2.1. Description Query your MultiClusterObservability resources for more details. 1.11.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.11.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.2.4. Consumes application/yaml 1.11.2.2.5. Tags observability.apps.open-cluster-management.io 1.11.2.3. Query a single multiclusterobservability 1.11.2.3.1. 
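The MultiClusterObservability request body above is also malformed as printed, with stray hyphens and missing braces around writeStorage. A minimal valid sketch of the same resource, applied from the shell, is shown below; the optional writeStorage list is omitted because its entries are blank in the original, and the secret name and key are the ones given above.

oc apply -f - <<'EOF'
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: example
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml
EOF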
Description Query a single MultiClusterObservability resource for more details. 1.11.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path multiclusterobservability_name required Name of the multiclusterobservability that you want to query. string 1.11.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.3.4. Tags observability.apps.open-cluster-management.io 1.11.2.4. Delete a multiclusterobservability resource 1.11.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path multiclusterobservability_name required Name of the multiclusterobservability that you want to delete. string 1.11.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.4.3. Tags observability.apps.open-cluster-management.io 1.11.3. Definitions 1.11.3.1. MultiClusterObservability Name Description Schema apiVersion required The versioned schema of the MultiClusterObservability. string kind required String value that represents the REST resource, MultiClusterObservability. string metadata required Describes rules that define the policy. object spec Name Description Schema enableDownsampling optional Enable or disable the downsample. Default value is true . If there is no downsample data, the query is unavailable. boolean imagePullPolicy optional Pull policy for the MultiClusterObservability images. The default value is Always . corev1.PullPolicy imagePullSecret optional Pull secret for the MultiClusterObservability images. The default value is multiclusterhub-operator-pull-secret string nodeSelector optional Specification of the node selector. map[string]string observabilityAddonSpec required The global settings for all managed clusters, which have the observability add-on installed. observabilityAddonSpec storageConfig required Specifies the storage configuration to be used by observability. StorageConfig tolerations optional Provided the ability for all components to tolerate any taints. []corev1.Toleration advanced optional The advanced configuration settings for observability. advanced resources optional Compute resources required by MultiClusterObservability. corev1.ResourceRequirements replicas optional Replicas for MultiClusterObservability. integer storageConfig Name Description Schema alertmanagerStorageSize optional The amount of storage applied to the alertmanager stateful sets. Default value is 1Gi . string compactStorageSize optional The amount of storage applied to the thanos compact stateful sets. Default value is 100Gi . string metricObjectStorage required Object store to configure secrets for metrics. metricObjectStorage receiveStorageSize optional The amount of storage applied to thanos receive stateful sets. Default value is 100Gi . string ruleStorageSize optional The amount of storage applied to thanos rule stateful sets. Default value is 1Gi . string storageClass optional Specify the storageClass stateful sets. This storage is used for the object storage if metricObjectStorage is configured for your operating system to create storage. Default value is gp2 . 
string storeStorageSize optional The amount of storage applied to thanos store stateful sets. Default value is 10Gi . string writeStorage optional A list of endpoint access information. [ ] WriteStorage writeStorage Name Description Schema name required The name of the secret with endpoint access information. string key required The key of the secret to select from. string metricObjectStorage Name Description Schema key required The key of the secret to select from. Must be a valid secret key. See Thanos documentation . string name required Name of the metricObjectStorage . See Kubernetes Names for more information. string observabilityAddonSpec Name Description Schema enableMetrics optional Indicates if the observability add-on sends metrics to the hub cluster. Default value is true . boolean interval optional Interval for when the observability add-on sends metrics to the hub cluster. Default value is 300 seconds ( 300s ). integer resources optional Resource for the metrics collector resource requirement. The default CPU request is 100m , memory request is 100Mi . corev1.ResourceRequirements advanced Name Description Schema retentionConfig optional Specifies the data retention configuration to be used by observability. RetentionConfig rbacQueryProxy optional Specifies the replicas and resources for the rbac-query-proxy deployment. CommonSpec grafana optional Specifies the replicas and resources for the grafana deployment CommonSpec alertmanager optional Specifies the replicas and resources for alertmanager statefulset. CommonSpec observatoriumAPI optional Specifies the replicas and resources for the observatorium-api deployment. CommonSpec queryFrontend optional Specifies the replicas and resources for the query-frontend deployment. CommonSpec query optional Specifies the replicas and resources for the query deployment. CommonSpec receive optional Specifies the replicas and resources for the receive statefulset. CommonSpec rule optional Specifies the replicas and resources for rule statefulset. CommonSpec store optional Specifies the replicas and resources for the store statefulset. CommonSpec CompactSpec optional Specifies the resources for compact statefulset. compact storeMemcached optional Specifies the replicas, resources, etc. for store-memcached. storeMemcached queryFrontendMemcached optional Specifies the replicas, resources, etc for query-frontend-memcached. CacheConfig retentionConfig Name Description Schema blockDuration optional The amount of time to block the duration for Time Series Database (TSDB) block. Default value is 2h . string deleteDelay optional The amount of time until a block marked for deletion is deleted from a bucket. Default value is 48h . string retentionInLocal optional The amount of time to retain raw samples from the local storage. Default value is 24h . string retentionResolutionRaw optional The amount of time to retain raw samples of resolution in a bucket. Default value is 365 days ( 365d ) string retentionResolution5m optional The amount of time to retain samples of resolution 1 (5 minutes) in a bucket. Default value is 365 days ( 365d ). string retentionResolution1h optional The amount of time to retain samples of resolution 2 (1 hour) in a bucket. Default value is 365 days ( 365d ). string CompactSpec Name Description Schema resources optional Compute resources required by thanos compact. corev1.ResourceRequirements serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the compact service account. 
map[string]string storeMemcached Name Description Schema resources optional Compute resources required by MultiClusterObservability. corev1.ResourceRequirements replicas optional Replicas for MultiClusterObservability. integer memoryLimitMb optional Memory limit of Memcached in megabytes. integer maxItemSize optional Max item size of Memcached. The default value is 1m, min:1k, max:1024m . string connectionLimit optional Max simultaneous connections of Memcached. The default value is integer status Name Description Schema status optional Status contains the different condition statuses for MultiClusterObservability. metav1.Condition CommonSpec Name Description Schema resources optional Compute resources required by the component. corev1.ResourceRequirements replicas optional Replicas for the component. integer QuerySpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the query deployment. CommonSpec serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the query service account. map[string]string ReceiveSpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the query deployment. CommonSpec serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the query service account. map[string]string StoreSpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the query deployment. CommonSpec serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the query service account. map[string]string RuleSpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the query deployment. CommonSpec evalInterval optional Specifies the evaluation interval for the rules. string serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the query service account. map[string]string 1.12. Search query API The search query API is not a Kubernetes API and therefore is not displayed through the Red Hat OpenShift Container Platform API Explorer. Continue reading to understand the search query API capabilities. 1.12.1. Overview You can expose the search query API with a route and use the API to resolve search queries. The API is a GraphQL endpoint. You can use any client such as curl or Postman. 1.12.1.1. Version information Version : 2.10.0 1.12.1.2. URI scheme BasePath : /searchapi/graphql Schemes : HTTPS 1.12.1.3. Configure API access Create a route to access the Search API from outside your cluster with the following command: oc create route passthrough search-api --service=search-search-api -n open-cluster-management Important: You must configure your route to secure your environment. See Route configuration in the OpenShift Container Platform documentation for more details.
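After the route is created, you can send queries to the endpoint with any HTTP client. The following curl sketch is an illustration only; it assumes you are logged in with oc, reads the route host and token from the current session, and reuses the Deployment query that is listed later in the Supported queries section:

TOKEN=$(oc whoami -t)
SEARCH_HOST=$(oc get route search-api -n open-cluster-management -o jsonpath='{.spec.host}')
curl -k -X POST "https://${SEARCH_HOST}/searchapi/graphql" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"query":"query mySearch($input: [SearchInput]) { search(input: $input) { count items } }","variables":{"input":[{"keywords":[],"filters":[{"property":"kind","values":["Deployment"]}],"limit":10}]}}'

The -k flag skips TLS verification and is only appropriate for testing; in production, trust the route certificate instead.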
1.12.2. Schema design input SearchFilter { property: String! values: [String]! } input SearchInput { keywords: [String] filters: [SearchFilter] limit: Int relatedKinds: [String] } type SearchResult { count: Int items: [Map] related: [SearchRelatedResult] } type SearchRelatedResult { kind: String! count: Int items: [Map] } Parameters with ! indicate that the field is required. 1.12.2.1. Description table of query inputs Type Description Property SearchFilter Defines a key and value to filter results. When you provide many values for a property, the API interprets the values as an "OR" operation. When you provide many filters, results match all filters and the API interprets them as an "AND" operation. string SearchInput Enter keywords to receive a list of resources. When you provide many keywords, the API interprets them as an "AND" operation. String limit Determines the maximum number of results returned after you enter the query. The default value is 10,000 . A value of -1 means that the limit is removed. Integer 1.12.2.2. Schema example { "query": "type SearchResult {count: Int items: [Map] related: [SearchRelatedResult]} type SearchRelatedResult {kind: String! count: Int items: [Map]}", "variables": { "input": [ { "keywords": [], "filters": [ { "property": "kind", "values": [ "Deployment" ] } ], "limit": 10 } ] } } 1.12.3. Generic schema type Query { search(input: [SearchInput]): [SearchResult] searchComplete(property: String!, query: SearchInput, limit: Int): [String] searchSchema: Map messages: [Message] } 1.12.4. Supported queries Continue reading to see the query types that are supported in JSON format. 1.12.4.1. Search for deployments Query: query mySearch($input: [SearchInput]) { search(input: $input) { items } } Variables: {"input":[ { "keywords":[], "filters":[ {"property":"kind","values":["Deployment"]}], "limit":10 } ]} 1.12.4.2. Search for pods Query: query mySearch($input: [SearchInput]) { search(input: $input) { items } } Variables: {"input":[ { "keywords":[], "filters":[ {"property":"kind","values":["Pod"]}], "limit":10 } ]} 1.13. MultiClusterHub API 1.13.1. Overview This documentation is for the MultiClusterHub resource for Red Hat Advanced Cluster Management for Kubernetes. The MultiClusterHub resource has four possible requests: create, query, delete and update. 1.13.1.1. Version information Version : 2.11.0 1.13.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.13.1.3. Tags multiclusterhubs.operator.open-cluster-management.io : Create and manage multicluster hub operators 1.13.2. Paths 1.13.2.1. Create a MultiClusterHub resource 1.13.2.1.1. Description Create a MultiClusterHub resource to define the configuration for an instance of the multicluster hub. 1.13.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the multicluster hub to be created. Definitions 1.13.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.13.2.1.4. Consumes multiclusterhubs/yaml 1.13.2.1.5. Tags multiclusterhubs.operator.open-cluster-management.io 1.13.2.1.6. Example HTTP request 1.13.2.1.6.1.
Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "name": "multiclusterhubs.operator.open-cluster-management.io" }, "spec": { "group": "operator.open-cluster-management.io", "names": { "kind": "MultiClusterHub", "listKind": "MultiClusterHubList", "plural": "multiclusterhubs", "shortNames": [ "mch" ], "singular": "multiclusterhub" }, "scope": "Namespaced", "versions": [ { "additionalPrinterColumns": [ { "description": "The overall status of the multicluster hub.", "jsonPath": ".status.phase", "name": "Status", "type": "string" }, { "jsonPath": ".metadata.creationTimestamp", "name": "Age", "type": "date" } ], "name": "v1", "schema": { "openAPIV3Schema": { "description": "MultiClusterHub defines the configuration for an instance of the multiCluster hub, a central point for managing multiple Kubernetes-based clusters. The deployment of multicluster hub components is determined based on the configuration that is defined in this resource.", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. The value is in CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "MultiClusterHubSpec defines the desired state of MultiClusterHub.", "properties": { "availabilityConfig": { "description": "Specifies deployment replication for improved availability. Options are: Basic and High (default).", "type": "string" }, "customCAConfigmap": { "description": "Provide the customized OpenShift default ingress CA certificate to Red Hat Advanced Cluster Management.", } "type": "string" }, "disableHubSelfManagement": { "description": "Disable automatic import of the hub cluster as a managed cluster.", "type": "boolean" }, "disableUpdateClusterImageSets": { "description": "Disable automatic update of ClusterImageSets.", "type": "boolean" }, "hive": { "description": "(Deprecated) Overrides for the default HiveConfig specification.", "properties": { "additionalCertificateAuthorities": { "description": "(Deprecated) AdditionalCertificateAuthorities is a list of references to secrets in the 'hive' namespace that contain an additional Certificate Authority to use when communicating with target clusters. These certificate authorities are used in addition to any self-signed CA generated by each cluster on installation.", "items": { "description": "LocalObjectReference contains the information to let you locate the referenced object inside the same namespace.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" }, "type": "array" }, "backup": { "description": "(Deprecated) Backup specifies configuration for backup integration. 
If absent, backup integration is disabled.", "properties": { "minBackupPeriodSeconds": { "description": "(Deprecated) MinBackupPeriodSeconds specifies that a minimum of MinBackupPeriodSeconds occurs in between each backup. This is used to rate limit backups. This potentially batches together multiple changes into one backup. No backups are lost for changes that happen during the interval that is queued up, and results in a backup once the interval has been completed.", "type": "integer" }, "velero": { "description": "(Deprecated) Velero specifies configuration for the Velero backup integration.", "properties": { "enabled": { "description": "(Deprecated) Enabled dictates if the Velero backup integration is enabled. If not specified, the default is disabled.", "type": "boolean" } }, "type": "object" } }, "type": "object" }, "externalDNS": { "description": "(Deprecated) ExternalDNS specifies configuration for external-dns if it is to be deployed by Hive. If absent, external-dns is not deployed.", "properties": { "aws": { "description": "(Deprecated) AWS contains AWS-specific settings for external DNS.", "properties": { "credentials": { "description": "(Deprecated) Credentials reference a secret that is used to authenticate with AWS Route53. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have AWS keys named 'aws_access_key_id' and 'aws_secret_access_key'.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" } }, "type": "object" }, "gcp": { "description": "(Deprecated) GCP contains Google Cloud Platform specific settings for external DNS.", "properties": { "credentials": { "description": "(Deprecated) Credentials reference a secret that is used to authenticate with GCP DNS. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have a key names 'osServiceAccount.json'. The credentials must specify the project to use.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" } }, "type": "object" } }, "type": "object" }, "failedProvisionConfig": { "description": "(Deprecated) FailedProvisionConfig is used to configure settings related to handling provision failures.", "properties": { "skipGatherLogs": { "description": "(Deprecated) SkipGatherLogs disables functionality that attempts to gather full logs from the cluster if an installation fails for any reason. The logs are stored in a persistent volume for up to seven days.", "type": "boolean" } }, "type": "object" }, "globalPullSecret": { "description": "(Deprecated) GlobalPullSecret is used to specify a pull secret that is used globally by all of the cluster deployments. For each cluster deployment, the contents of GlobalPullSecret are merged with the specific pull secret for a cluster deployment(if specified), with precedence given to the contents of the pull secret for the cluster deployment.", "properties": { "name": { "description": "Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" }, "maintenanceMode": { "description": "(Deprecated) MaintenanceMode can be set to true to disable the Hive controllers in situations where you need to ensure nothing is running that adds or act upon finalizers on Hive types. This should rarely be needed. Sets replicas to zero for the 'hive-controllers' deployment to accomplish this.", "type": "boolean" } }, "required": [ "failedProvisionConfig" ], "type": "object" }, "imagePullSecret": { "description": "Override pull secret for accessing MultiClusterHub operand and endpoint images.", "type": "string" }, "ingress": { "description": "Configuration options for ingress management.", "properties": { "sslCiphers": { "description": "List of SSL ciphers enabled for management ingress. Defaults to full list of supported ciphers.", "items": { "type": "string" }, "type": "array" } }, "type": "object" }, "nodeSelector": { "additionalProperties": { "type": "string" }, "description": "Set the node selectors..", "type": "object" }, "overrides": { "description": "Developer overrides.", "properties": { "imagePullPolicy": { "description": "Pull policy of the multicluster hub images.", "type": "string" } }, "type": "object" }, "separateCertificateManagement": { "description": "(Deprecated) Install cert-manager into its own namespace.", "type": "boolean" } }, "type": "object" }, "status": { "description": "MulticlusterHubStatus defines the observed state of MultiClusterHub.", "properties": { "components": { "additionalProperties": { "description": "StatusCondition contains condition information.", "properties": { "lastTransitionTime": { "description": "LastTransitionTime is the last time the condition changed from one status to another.", "format": "date-time", "type": "string" }, "message": { "description": "Message is a human-readable message indicating\ndetails about the last status change.", "type": "string" }, "reason": { "description": "Reason is a (brief) reason for the last status change of the condition.", "type": "string" }, "status": { "description": "Status is the status of the condition. One of True, False, Unknown.", "type": "string" }, "type": { "description": "Type is the type of the cluster condition.", "type": "string" } }, "type": "object" }, "description": "Components []ComponentCondition `json:\"manifests,omitempty\"`", "type": "object" }, "conditions": { "description": "Conditions contain the different condition statuses for the MultiClusterHub.", "items": { "description": "StatusCondition contains condition information.", "properties": { "lastTransitionTime": { "description": "LastTransitionTime is the last time the condition changed from one status to another.", "format": "date-time", "type": "string" }, "lastUpdateTime": { "description": "The last time this condition was updated.", "format": "date-time", "type": "string" }, "message": { "description": "Message is a human-readable message indicating details about the last status change.", "type": "string" }, "reason": { "description": "Reason is a (brief) reason for the last status change of the condition.", "type": "string" }, "status": { "description": "Status is the status of the condition. 
One of True, False, Unknown.", "type": "string" }, "type": { "description": "Type is the type of the cluster condition.", "type": "string" } }, "type": "object" }, "type": "array" }, "currentVersion": { "description": "CurrentVersion indicates the current version..", "type": "string" }, "desiredVersion": { "description": "DesiredVersion indicates the desired version.", "type": "string" }, "phase": { "description": "Represents the running phase of the MultiClusterHub", "type": "string" } }, "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] } } 1.13.2.2. Query all MultiClusterHubs 1.13.2.2.1. Description Query your multicluster hub operator for more details. 1.13.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.13.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.13.2.2.4. Consumes operator/yaml 1.13.2.2.5. Tags multiclusterhubs.operator.open-cluster-management.io 1.13.2.3. Query a MultiClusterHub operator 1.13.2.3.1. Description Query a single multicluster hub operator for more details. 1.13.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the application that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.13.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.13.2.3.4. Tags multiclusterhubs.operator.open-cluster-management.io 1.13.2.4. Delete a MultiClusterHub operator 1.13.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the multicluster hub operator that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.13.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.13.2.4.3. Tags multiclusterhubs.operator.open-cluster-management.io 1.13.3. Definitions 1.13.3.1. Multicluster hub operator Name Description Schema apiVersion required The versioned schema of the MultiClusterHub. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. object spec required The resource specification. spec spec availabilityConfig optional Specifies deployment replication for improved availability. The default value is High . string customCAConfigmap optional Provide the customized OpenShift default ingress CA certificate to Red Hat Advanced Cluster Management. string disableHubSelfManagement optional Disable automatic import of the hub cluster as a managed cluster. boolean disableUpdateClusterImageSets optional Disable automatic update of ClusterImageSets. 
boolean hive optional (Deprecated) An object that overrides the default HiveConfig specification. hive imagePullSecret optional Overrides pull secret for accessing MultiClusterHub operand and endpoint images. string ingress optional Configuration options for ingress management. ingress nodeSelector optional Set the node selectors. string separateCertificateManagement optional (Deprecated) Install cert-manager into its own namespace. boolean hive additionalCertificateAuthorities optional (Deprecated) A list of references to secrets in the hive namespace that contain an additional Certificate Authority to use when communicating with target clusters. These certificate authorities are used in addition to any self-signed CA generated by each cluster on installation. object backup optional (Deprecated) Specifies the configuration for backup integration. If absent, backup integration is disabled. backup externalDNS optional (Deprecated) Specifies configuration for external-dns if it is to be deployed by Hive. If absent, external-dns is not deployed. object failedProvisionConfig required (Deprecated) Used to configure settings related to handling provision failures. failedProvisionConfig globalPullSecret optional (Deprecated) Used to specify a pull secret that is used globally by all of the cluster deployments. For each cluster deployment, the contents of globalPullSecret are merged with the specific pull secret for a cluster deployment (if specified), with precedence given to the contents of the pull secret for the cluster deployment. object maintenanceMode optional (Deprecated) Can be set to true to disable the hive controllers in situations where you need to ensure nothing is running that adds or acts upon finalizers on Hive types. This should rarely be needed. Sets replicas to 0 for the hive-controllers deployment to accomplish this. boolean ingress sslCiphers optional List of SSL ciphers enabled for management ingress. Defaults to full list of supported ciphers. string backup minBackupPeriodSeconds optional (Deprecated) Specifies that a minimum of MinBackupPeriodSeconds occurs in between each backup. This is used to rate limit backups. This potentially batches together multiple changes into one backup. No backups are lost; changes that happen during this interval are queued up and result in a backup once the interval has been completed. integer velero optional (Deprecated) Velero specifies configuration for the Velero backup integration. object failedProvisionConfig skipGatherLogs optional (Deprecated) Disables functionality that attempts to gather full logs from the cluster if an installation fails for any reason. The logs are stored in a persistent volume for up to seven days. boolean status components optional The components of the status configuration. object conditions optional Contains the different conditions for the multicluster hub. conditions desiredVersion optional Indicates the desired version. string phase optional Represents the active phase of the MultiClusterHub resource. The values that are used for this parameter are: Pending , Running , Installing , Updating , Uninstalling . string conditions lastTransitionTime optional The last time the condition changed from one status to another. string lastUpdateTime optional The last time this condition was updated. string message required Message is a human-readable message indicating details about the last status change. string reason required A brief reason for why the condition status changed.
string status required The status of the condition. string type required The type of the cluster condition. string StatusConditions kind required The resource kind that represents this status. string available required Indicates whether this component is properly running. boolean lastTransitionTime optional The last time the condition changed from one status to another. metav1.time lastUpdateTime optional The last time this condition was updated. metav1.time message required Message is a human-readable message indicating details about the last status change. string reason optional A brief reason for why the condition status changed. string status optional The status of the condition. string type optional The type of the cluster condition. string 1.14. Placement API (v1beta1) 1.14.1. Overview This documentation is for the Placement resource for Red Hat Advanced Cluster Management for Kubernetes. The Placement resource has four possible requests: create, query, delete, and update. Placement defines a rule to select a set of ManagedClusters from the ManagedClusterSets that are bound to the placement namespace. A slice of PlacementDecisions with the label cluster.open-cluster-management.io/placement={placement name} is created to represent the ManagedClusters that are selected by this placement. 1.14.1.1. Version information Version : 2.11.0 1.14.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.14.1.3. Tags cluster.open-cluster-management.io : Create and manage Placements 1.14.2. Paths 1.14.2.1. Query all Placements 1.14.2.1.1. Description Query your Placements for more details. 1.14.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.14.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.1.4. Consumes placement/yaml 1.14.2.1.5. Tags cluster.open-cluster-management.io 1.14.2.2. Create a Placement 1.14.2.2.1. Description Create a Placement. 1.14.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the placement binding to be created. Placement 1.14.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.2.4. Consumes placement/yaml 1.14.2.2.5. Tags cluster.open-cluster-management.io 1.14.2.2.6. Example HTTP request 1.14.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta1", "kind" : "Placement", "metadata" : { "name" : "placement1", "namespace": "ns1" }, "spec": { "predicates": [ { "requiredClusterSelector": { "labelSelector": { "matchLabels": { "vendor": "OpenShift" } } } } ] }, "status" : { } } 1.14.2.3. Query a single Placement 1.14.2.3.1. Description Query a single Placement for more details. 1.14.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placement_name required Name of the Placement that you want to query. string 1.14.2.3.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.3.4. Tags cluster.open-cluster-management.io 1.14.2.4. Delete a Placement 1.14.2.4.1. Description Delete a single Placement. 1.14.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placement_name required Name of the Placement that you want to delete. string 1.14.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.4.4. Tags cluster.open-cluster-management.io 1.14.3. Definitions 1.14.3.1. Placement Name Description Schema apiVersion required Versioned schema of the Placement. string kind required String value that represents the REST resource. string metadata required Metadata of the Placement. object spec required Specification of the Placement. spec spec Name Description Schema clusterSets optional A subset of ManagedClusterSets from which the ManagedClusters are selected. If the ManagedClusterSet is empty, ManagedClusters are selected from the ManagedClusterSets that are bound to the Placement namespace. If the ManagedClusterSet contains ManagedClusters , ManagedClusters are selected from the intersection of this subset. The selected ManagedClusterSets are bound to the placement namespace. string array numberOfClusters optional Number of ManagedClusters that you want to be selected. integer (int32) predicates optional Subset of cluster predicates that select ManagedClusters . The conditional logic is OR . clusterPredicate array prioritizerPolicy optional Policy of the prioritizers. prioritizerPolicy tolerations optional Value that allows, but does not require, the managed clusters with certain taints to be selected by placements with matching tolerations. toleration array clusterPredicate Name Description Schema requiredClusterSelector optional A cluster selector to select ManagedClusters with a label and cluster claim. clusterSelector clusterSelector Name Description Schema labelSelector optional Selector of ManagedClusters by label. object claimSelector optional Selector of ManagedClusters by claim. clusterClaimSelector clusterClaimSelector Name Description Schema matchExpressions optional Subset of the cluster claim selector requirements. The conditional logic is AND . < object > array prioritizerPolicy Name Description Schema mode optional Either Exact , Additive , or "". The default value of "" is Additive . string configurations optional Configuration of the prioritizer. prioritizerConfig array prioritizerConfig Name Description Schema scoreCoordinate required Configuration of the prioritizer and score source. scoreCoordinate weight optional Weight of the prioritizer score. The value must be within the range: [-10,10]. int32 scoreCoordinate Name Description Schema type required Type of the prioritizer score. Valid values are "BuiltIn" or "AddOn". string builtIn optional Name of a BuiltIn prioritizer from the following options: 1) Balance: Balance the decisions among the clusters. 2) Steady: Ensure the existing decision is stabilized. 3) ResourceAllocatableCPU & ResourceAllocatableMemory: Sort clusters based on the allocatable resources. 4) Spread: Spread the workload evenly to topologies. 
string addOn optional When type is AddOn , AddOn defines the resource name and score name. object toleration Name Description Schema key optional Taint key that the toleration applies to. Empty means match all of the taint keys. string operator optional Relationship of a key to the value. Valid operators are Exists and Equal . The default value is Equal . string value optional Taint value that matches the toleration. string effect optional Taint effect to match. Empty means match all of the taint effects. When specified, allowed values are NoSelect , PreferNoSelect , and NoSelectIfNew . string tolerationSeconds optional Length of time that a taint is tolerated, after which the taint is not tolerated. The default value is nil, which indicates that there is no time limit on how long the taint is tolerated. int64 1.15. PlacementDecisions API (v1beta1) 1.15.1. Overview This documentation is for the PlacementDecision resource for Red Hat Advanced Cluster Management for Kubernetes. The PlacementDecision resource has four possible requests: create, query, delete, and update. A PlacementDecision indicates a decision from a placement. A PlacementDecision uses the label cluster.open-cluster-management.io/placement={placement name} to reference a certain placement. 1.15.1.1. Version information Version : 2.11.0 1.15.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.15.1.3. Tags cluster.open-cluster-management.io : Create and manage PlacementDecisions. 1.15.2. Paths 1.15.2.1. Query all PlacementDecisions 1.15.2.1.1. Description Query your PlacementDecisions for more details. 1.15.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.15.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.1.4. Consumes placementdecision/yaml 1.15.2.1.5. Tags cluster.open-cluster-management.io 1.15.2.2. Create a PlacementDecision 1.15.2.2.1. Description Create a PlacementDecision. 1.15.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the PlacementDecision to be created. PlacementDecision 1.15.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.2.4. Consumes placementdecision/yaml 1.15.2.2.5. Tags cluster.open-cluster-management.io 1.15.2.2.6. Example HTTP request 1.15.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta1", "kind" : "PlacementDecision", "metadata" : { "labels" : { "cluster.open-cluster-management.io/placement" : "placement1" }, "name" : "placement1-decision1", "namespace": "ns1" }, "status" : { } } 1.15.2.3. Query a single PlacementDecision 1.15.2.3.1. Description Query a single PlacementDecision for more details. 1.15.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placementdecision_name required Name of the PlacementDecision that you want to query. string 1.15.2.3.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.3.4. Tags cluster.open-cluster-management.io 1.15.2.4. Delete a PlacementDecision 1.15.2.4.1. Description Delete a single PlacementDecision. 1.15.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placementdecision_name required Name of the PlacementDecision that you want to delete. string 1.15.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.4.4. Tags cluster.open-cluster-management.io 1.15.3. Definitions 1.15.3.1. PlacementDecision Name Description Schema apiVersion required Versioned schema of PlacementDecision . string kind required String value that represents the REST resource. string metadata required Metadata of PlacementDecision . object status optional Current status of the PlacementDecision . PlacementStatus PlacementStatus Name Description Schema Decisions required Slice of decisions according to a placement. ClusterDecision array ClusterDecision Name Description Schema clusterName required Name of the ManagedCluster . string reason required Reason why the ManagedCluster is selected. string 1.16. DiscoveryConfig API 1.16.1. Overview This documentation is for the DiscoveryConfig resource for Red Hat Advanced Cluster Management for Kubernetes. The DiscoveryConfig resource has four possible requests: create, query, delete, and update. 1.16.1.1. Version information Version : 2.11.0 1.16.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.16.1.3. Tags discoveryconfigs.discovery.open-cluster-management.io : Create and manage DiscoveryConfigs 1.16.2. Paths 1.16.2.1. Create a DiscoveryConfig 1.16.2.1.1. Description Create a DiscoveryConfig. 1.16.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the DiscoveryConfig to be created. DiscoveryConfig 1.16.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.1.4. Consumes discoveryconfigs/yaml 1.16.2.1.5. Tags discoveryconfigs.discovery.open-cluster-management.io 1.16.2.1.5.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.4.1", }, "creationTimestamp": null, "name": "discoveryconfigs.discovery.open-cluster-management.io", }, "spec": { "group": "discovery.open-cluster-management.io", "names": { "kind": "DiscoveryConfig", "listKind": "DiscoveryConfigList", "plural": "discoveryconfigs", "singular": "discoveryconfig" }, "scope": "Namespaced", "versions": [ { "name": "v1", "schema": { "openAPIV3Schema": { "description": "DiscoveryConfig is the Schema for the discoveryconfigs API", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "DiscoveryConfigSpec defines the desired state of DiscoveryConfig", "properties": { "credential": { "description": "Credential is the secret containing credentials to connect to the OCM api on behalf of a user", "type": "string" }, "filters": { "description": "Sets restrictions on what kind of clusters to discover", "properties": { "lastActive": { "description": "LastActive is the last active in days of clusters to discover, determined by activity timestamp", "type": "integer" }, "openShiftVersions": { "description": "OpenShiftVersions is the list of release versions of OpenShift of the form \"<Major>.<Minor>\"", "items": { "description": "Semver represents a partial semver string with the major and minor version in the form \"<Major>.<Minor>\". For example: \"4.14\"", "pattern": "^(?:0|[1-9]\\d*)\\.(?:0|[1-9]\\d*)USD", "type": "string" }, "type": "array" } }, "type": "object" } }, "required": [ "credential" ], "type": "object" }, "status": { "description": "DiscoveryConfigStatus defines the observed state of DiscoveryConfig", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.16.2.2. Query all DiscoveryConfigs 1.16.2.2.1. Description Query your discovery config operator for more details. 1.16.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.16.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.2.4. Consumes operator/yaml 1.16.2.2.5. Tags discoveryconfigs.discovery.open-cluster-management.io 1.16.2.3. Delete a DiscoveryConfig operator 1.16.2.3.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the Discovery Config operator that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.16.2.3.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.3.3. Tags discoveryconfigs.operator.open-cluster-management.io 1.16.3. Definitions 1.16.3.1. DiscoveryConfig Name Description Schema apiVersion required The versioned schema of the discoveryconfigs. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. 
object spec required Defines the desired state of DiscoveryConfig. See List of specs 1.16.3.2. List of specs Name Description Schema credential required Credential is the secret containing credentials to connect to the OCM API on behalf of a user. string filters optional Sets restrictions on what kind of clusters to discover. See List of filters 1.16.3.3. List of filters Name Description Schema lastActive required LastActive is the last active in days of clusters to discover, determined by activity timestamp. integer openShiftVersions optional OpenShiftVersions is the list of release versions of OpenShift of the form "<Major>.<Minor>" object 1.17. DiscoveredCluster API 1.17.1. Overview This documentation is for the DiscoveredCluster resource for Red Hat Advanced Cluster Management for Kubernetes. The DiscoveredCluster resource has four possible requests: create, query, delete, and update. 1.17.1.1. Version information Version : 2.11.0 1.17.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.17.1.3. Tags discoveredclusters.discovery.open-cluster-management.io : Create and manage DiscoveredClusters 1.17.2. Paths 1.17.2.1. Create a DiscoveredCluster 1.17.2.1.1. Description Create a DiscoveredCluster. 1.17.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the DiscoveredCluster to be created. DiscoveredCluster 1.17.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.17.2.1.4. Consumes discoveredclusters/yaml 1.17.2.1.5. Tags discoveredclusters.discovery.open-cluster-management.io 1.17.2.1.5.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.4.1",\ }, "creationTimestamp": null, "name": "discoveredclusters.discovery.open-cluster-management.io", }, "spec": { "group": "discovery.open-cluster-management.io", "names": { "kind": "DiscoveredCluster", "listKind": "DiscoveredClusterList", "plural": "discoveredclusters", "singular": "discoveredcluster" }, "scope": "Namespaced", "versions": [ { "name": "v1", "schema": { "openAPIV3Schema": { "description": "DiscoveredCluster is the Schema for the discoveredclusters API", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "DiscoveredClusterSpec defines the desired state of DiscoveredCluster", "properties": { "activityTimestamp": { "format": "date-time", "type": "string" }, "apiUrl": { "type": "string" }, "cloudProvider": { "type": "string" }, "console": { "type": "string" }, "creationTimestamp": { "format": "date-time", "type": "string" }, "credential": { "description": "ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, \"must refer only to types A and B\" or \"UID not honored\" or \"name must be restricted\". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .", "properties": { "apiVersion": { "description": "API version of the referent.", "type": "string" }, "fieldPath": { "description": "If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.", "type": "string" }, "kind": { "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" }, "namespace": { "description": "Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "type": "string" }, "resourceVersion": { "description": "Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", "type": "string" }, "uid": { "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids", "type": "string" } }, "type": "object" }, "displayName": { "type": "string" }, "isManagedCluster": { "type": "boolean" }, "name": { "type": "string" }, "openshiftVersion": { "type": "string" }, "status": { "type": "string" }, "type": { "type": "string" } }, "required": [ "apiUrl", "displayName", "isManagedCluster", "name", "type" ], "type": "object" }, "status": { "description": "DiscoveredClusterStatus defines the observed state of DiscoveredCluster", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.17.2.2. Query all DiscoveredClusters 1.17.2.2.1. Description Query your discovered clusters operator for more details. 1.17.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.17.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.17.2.2.4. Consumes operator/yaml 1.17.2.2.5. Tags discoveredclusters.discovery.open-cluster-management.io 1.17.2.3. Delete a DiscoveredCluster operator 1.17.2.3.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the Discovered Cluster operator that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.17.2.3.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.17.2.3.3. Tags discoveredclusters.operator.open-cluster-management.io 1.17.3. Definitions 1.17.3.1. DiscoveredCluster Name Description Schema apiVersion required The versioned schema of the discoveredclusters. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. object spec required DiscoveredClusterSpec defines the desired state of DiscoveredCluster. See List of specs 1.17.3.2. List of specs Name Description Schema activityTimestamp optional Discoveredclusters last available activity timestamp. metav1.time apiUrl required Discoveredclusters API URL endpoint. string cloudProvider optional Cloud provider of discoveredcluster. string console optional Discoveredclusters console URL endpoint. string creationTimestamp optional Discoveredclusters creation timestamp. metav1.time credential optional The reference to the credential from which the cluster was discovered. corev1.ObjectReference displayName required The display name of the discovered cluster. string isManagedCluster required If true, cluster is managed by ACM. boolean name required The name of the discoveredcluster. string openshiftVersion optional The OpenShift version of the discovered cluster. 
string status optional The status of the discovered cluster. string type required The OpenShift flavor (ex. OCP, ROSA, etc.). string 1.18. AddOnDeploymentConfig API (v1alpha1) 1.18.1. Overview This documentation is for the AddOnDeploymentConfig resource for Red Hat Advanced Cluster Management for Kubernetes. The AddOnDeploymentConfig resource has four possible requests: create, query, delete, and update. AddOnDeploymentConfig represents a deployment configuration for an add-on. 1.18.1.1. Version information Version : 2.11.0 1.18.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.18.1.3. Tags addon.open-cluster-management.io : Create and manage AddOnDeploymentConfigs 1.18.2. Paths 1.18.2.1. Query all AddOnDeploymentConfigs 1.18.2.1.1. Description Query your AddOnDeploymentConfigs for more details. 1.18.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.18.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.1.4. Consumes addondeploymentconfig/yaml 1.18.2.1.5. Tags addon.open-cluster-management.io 1.18.2.2. Create a AddOnDeploymentConfig 1.18.2.2.1. Description Create a AddOnDeploymentConfig. 1.18.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the AddOnDeploymentConfig binding to be created. AddOnDeploymentConfig 1.18.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.2.4. Consumes addondeploymentconfig/yaml 1.18.2.2.5. Tags addon.open-cluster-management.io 1.18.2.2.6. Example HTTP request 1.18.2.2.6.1. Request body { "apiVersion": "addon.open-cluster-management.io/v1alpha1", "kind": "AddOnDeploymentConfig", "metadata": { "name": "deploy-config", "namespace": "open-cluster-management-hub" }, "spec": { "nodePlacement": { "nodeSelector": { "node-dedicated": "acm-addon" }, "tolerations": [ { "effect": "NoSchedule", "key": "node-dedicated", "operator": "Equal", "value": "acm-addon" } ] } } } 1.18.2.3. Query a single AddOnDeploymentConfig 1.18.2.3.1. Description Query a single AddOnDeploymentConfig for more details. 1.18.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path addondeploymentconfig_name required Name of the AddOnDeploymentConfig that you want to query. string 1.18.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.3.4. Tags addon.open-cluster-management.io 1.18.2.4. Delete a AddOnDeploymentConfig 1.18.2.4.1. Description Delete a single AddOnDeploymentConfig. 1.18.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path addondeploymentconfig_name required Name of the AddOnDeploymentConfig that you want to delete. string 1.18.2.4.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.4.4. Tags addon.open-cluster-management.io 1.18.3. Definitions 1.18.3.1. AddOnDeploymentConfig Name Description Schema apiVersion required Versioned schema of the AddOnDeploymentConfig. string kind required String value that represents the REST resource. string metadata required Metadata of the AddOnDeploymentConfig. object spec required Specification of the AddOnDeploymentConfig. spec spec Name Description Schema customizedVariables optional A list of name-value variables for the current add-on deployment. The add-on implementation can use these variables to render its add-on deployment. customizedVariable array nodePlacement required Enables explicit control over the scheduling of the add-on agents on the managed cluster. nodePlacement customizedVariable Name Description Schema name required Name of this variable. string value optional Value of this variable. string nodePlacement Name Description Schema nodeSelector optional Define which nodes the pods are scheduled to run on. When the nodeSelector is empty, the nodeSelector selects all nodes. map[string]string tolerations optional Applied to pods and used to schedule pods to any taint that matches the <key,value,effect> toleration using the matching operator ( <operator> ). []corev1.Toleration 1.19. ClusterManagementAddOn API (v1alpha1) 1.19.1. Overview This documentation is for the ClusterManagementAddOn resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterManagementAddOn resource has four possible requests: create, query, delete, and update. ClusterManagementAddOn represents the registration of an add-on to the cluster manager. This resource allows the user to discover which add-on is available for the cluster manager and also provides metadata information about the add-on. This resource also provides a reference to ManagedClusterAddOn, the name of the ClusterManagementAddOn resource that is used for the namespace-scoped ManagedClusterAddOn resource. ClusterManagementAddOn is a cluster-scoped resource. 1.19.1.1. Version information Version : 2.11.0 1.19.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.19.1.3. Tags addon.open-cluster-management.io : Create and manage ClusterManagementAddOns 1.19.2. Paths 1.19.2.1. Query all ClusterManagementAddOns 1.19.2.1.1. Description Query your ClusterManagementAddOns for more details. 1.19.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.19.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.1.4. Consumes clustermanagementaddon/yaml 1.19.2.1.5. Tags addon.open-cluster-management.io 1.19.2.2. Create a ClusterManagementAddOn 1.19.2.2.1. Description Create a ClusterManagementAddOn. 1.19.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the ClusterManagementAddon binding to be created. ClusterManagementAddOn 1.19.2.2.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.2.4. Consumes clustermanagementaddon/yaml 1.19.2.2.5. Tags addon.open-cluster-management.io 1.19.2.2.6. Example HTTP request 1.19.2.2.6.1. Request body { "apiVersion": "addon.open-cluster-management.io/v1alpha1", "kind": "ClusterManagementAddOn", "metadata": { "name": "helloworld" }, "spec": { "supportedConfigs": [ { "defaultConfig": { "name": "deploy-config", "namespace": "open-cluster-management-hub" }, "group": "addon.open-cluster-management.io", "resource": "addondeploymentconfigs" } ] }, "status" : { } } 1.19.2.3. Query a single ClusterManagementAddOn 1.19.2.3.1. Description Query a single ClusterManagementAddOn for more details. 1.19.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clustermanagementaddon_name required Name of the ClusterManagementAddOn that you want to query. string 1.19.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.3.4. Tags addon.open-cluster-management.io 1.19.2.4. Delete a ClusterManagementAddOn 1.19.2.4.1. Description Delete a single ClusterManagementAddOn. 1.19.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clustermanagementaddon_name required Name of the ClusterManagementAddOn that you want to delete. string 1.19.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.4.4. Tags addon.open-cluster-management.io 1.19.3. Definitions 1.19.3.1. ClusterManagementAddOn Name Description Schema apiVersion required Versioned schema of the ClusterManagementAddOn. string kind required String value that represents the REST resource. string metadata required Metadata of the ClusterManagementAddOn. object spec required Specification of the ClusterManagementAddOn. spec spec Name Description Schema addOnMeta optional AddOnMeta is a reference to the metadata information for the add-on. addOnMeta supportedConfigs optional SupportedConfigs is a list of configuration types supported by add-on. configMeta array addOnMeta Name Description Schema displayName optional Represents the name of add-on that is displayed. string description optional Represents the detailed description of the add-on. string configMeta Name Description Schema group optional Group of the add-on configuration. string resource required Resource of the add-on configuration. string defaultConfig required Represents the namespace and name of the default add-on configuration. This is where all add-ons have a same configuration. configReferent configReferent Name Description Schema namespace optional Namespace of the add-on configuration. If this field is not set, the configuration is cluster-scope. string name required Name of the add-on configuration. string 1.20. ManagedClusterAddOn API (v1alpha1) 1.20.1. Overview This documentation is for the ManagedClusterAddOn resource for Red Hat Advanced Cluster Management for Kubernetes. 
The ManagedClusterAddOn resource has four possible requests: create, query, delete, and update. ManagedClusterAddOn is the custom resource object which holds the current state of an add-on. This resource should be created in the ManagedCluster namespace. 1.20.1.1. Version information Version : 2.11.0 1.20.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.20.1.3. Tags addon.open-cluster-management.io : Create and manage ManagedClusterAddOns 1.20.2. Paths 1.20.2.1. Query all ManagedClusterAddOns 1.20.2.1.1. Description Query your ManagedClusterAddOns for more details. 1.20.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.20.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.1.4. Consumes managedclusteraddon/yaml 1.20.2.1.5. Tags addon.open-cluster-management.io 1.20.2.2. Create a ManagedClusterAddOn 1.20.2.2.1. Description Create a ManagedClusterAddOn. 1.20.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters that describe the ManagedClusterAddOn binding to be created. ManagedClusterAddOn 1.20.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.2.4. Consumes managedclusteraddon/yaml 1.20.2.2.5. Tags addon.open-cluster-management.io 1.20.2.2.6. Example HTTP request 1.20.2.2.6.1. Request body { "apiVersion": "addon.open-cluster-management.io/v1alpha1", "kind": "ManagedClusterAddOn", "metadata": { "name": "helloworld", "namespace": "cluster1" }, "spec": { "configs": [ { "group": "addon.open-cluster-management.io", "name": "cluster-deploy-config", "namespace": "open-cluster-management-hub", "resource": "addondeploymentconfigs" } ], "installNamespace": "default" }, "status" : { } } 1.20.2.3. Query a single ManagedClusterAddOn 1.20.2.3.1. Description Query a single ManagedClusterAddOn for more details. 1.20.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path managedclusteraddon_name required Name of the ManagedClusterAddOn that you want to query. string 1.20.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.3.4. Tags addon.open-cluster-management.io 1.20.2.4. Delete a ManagedClusterAddOn 1.20.2.4.1. Description Delete a single ManagedClusterAddOn. 1.20.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path managedclusteraddon_name required Name of the ManagedClusterAddOn that you want to delete. string 1.20.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.4.4. Tags addon.open-cluster-management.io 1.20.3. Definitions 1.20.3.1. 
ManagedClusterAddOn Name Description Schema apiVersion required Versioned schema of the ManagedClusterAddOn. string kind required String value that represents the REST resource. string metadata required Metadata of the ManagedClusterAddOn. object spec required Specification of the ManagedClusterAddOn. spec spec Name Description Schema installNamespace optional The namespace on the managed cluster to install the add-on agent. If it is not set, the open-cluster-management-agent-addon namespace is used to install the add-on agent. string configs optional A list of add-on configurations where the current add-on has its own configurations. addOnConfig array addOnConfig Name Description Schema group optional Group of the add-on configuration. string resource required Resource of the add-on configuration. string namespace optional Namespace of the add-on configuration. If this field is not set, the configuration is cluster-scope. string name required Name of the add-on configuration. string 1.21. ManagedClusterSet API (v1beta2) 1.21.1. Overview This documentation is for the ManagedClusterSet resource for Red Hat Advanced Cluster Management for Kubernetes. The ManagedClusterSet resource has four possible requests: create, query, delete, and update. ManagedClusterSet groups two or more managed clusters into a set that you can operate together. Managed clusters that belong to a set can have similar attributes, such as shared use purposes or the same deployment region. 1.21.1.1. Version information Version : 2.11.0 1.21.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.21.1.3. Tags cluster.open-cluster-management.io : Create and manage ManagedClusterSets 1.21.2. Paths 1.21.2.1. Query all managedclustersets 1.21.2.1.1. Description Query your managedclustersets for more details. 1.21.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.21.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.1.4. Consumes managedclusterset/yaml 1.21.2.1.5. Tags cluster.open-cluster-management.io 1.21.2.2. Create a managedclusterset 1.21.2.2.1. Description Create a managedclusterset. 1.21.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Body body required Parameters describing the managedclusterset to be created. Managedclusterset 1.21.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.2.4. Consumes managedclusterset/yaml 1.21.2.2.5. Tags cluster.open-cluster-management.io 1.21.2.2.6. Example HTTP request 1.21.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta2", "kind" : "ManagedClusterSet", "metadata" : { "name" : "example-clusterset", }, "spec": { }, "status" : { } } 1.21.2.3. Query a single managedclusterset 1.21.2.3.1. Description Query a single managedclusterset for more details. 1.21.2.3.2. 
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path managedclusterset_name required Name of the managedclusterset that you want to query. string 1.21.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.3.4. Tags cluster.open-cluster-management.io 1.21.2.4. Delete a managedclusterset 1.21.2.4.1. Description Delete a single managedclusterset. 1.21.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path managedclusterset_name required Name of the managedclusterset that you want to delete. string 1.21.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.4.4. Tags cluster.open-cluster-management.io 1.21.3. Definitions 1.21.3.1. ManagedClusterSet Name Description Schema apiVersion required Versioned schema of the ManagedClusterSet . string kind required String value that represents the REST resource. string metadata required Metadata of the ManagedClusterSet . object spec required Specification of the ManagedClusterSet . spec 1.22. KlusterletConfig API (v1alpha1) 1.22.1. Overview This documentation is for the KlusterletConfig resource for Red Hat Advanced Cluster Management for Kubernetes. The KlusterletConfig resource has four possible requests: create, query, delete, and update. KlusterletConfig contains configuration information about a klusterlet, such as nodeSelector , tolerations , and pullSecret . KlusterletConfig is a cluster-scoped resource and only works on klusterlet pods in the open-cluster-managemnet-agent namespace. KlusterletConfig does not affect add-on deployment configurations. 1.22.1.1. Version information Version : 2.11.0 1.22.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.22.1.3. Tags config.open-cluster-management.io : Create and manage KlusterletConfig 1.22.2. Paths 1.22.2.1. Query all KlusterletConfig 1.22.2.1.1. Description Query your KlusterletConfigs for more details. 1.22.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.22.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.1.4. Consumes klusterletconfig/yaml 1.22.2.1.5. Tags config.open-cluster-management.io 1.22.2.2. Create a KlusterletConfig 1.22.2.2.1. Description Create a KlusterletConfig. 1.22.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the KlusterletConfig binding to be created. KlusterletConfig 1.22.2.2.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.2.4. Consumes klusterletconfig/yaml 1.22.2.2.5. Tags config.open-cluster-management.io 1.22.2.2.6. Example HTTP request 1.22.2.2.6.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.7.0" }, "creationTimestamp": null, "name": "klusterletconfigs.config.open-cluster-management.io" }, "spec": { "group": "config.open-cluster-management.io", "names": { "kind": "KlusterletConfig", "listKind": "KlusterletConfigList", "plural": "klusterletconfigs", "singular": "klusterletconfig" }, "preserveUnknownFields": false, "scope": "Cluster", "versions": [ { "name": "v1alpha1", "schema": { "openAPIV3Schema": { "description": "KlusterletConfig contains the configuration of a klusterlet including the upgrade strategy, config overrides, proxy configurations etc.", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "Spec defines the desired state of KlusterletConfig", "properties": { "hubKubeAPIServerProxyConfig": { "description": "HubKubeAPIServerProxyConfig holds proxy settings for connections between klusterlet/add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy settings is available.", "properties": { "caBundle": { "description": "CABundle is a CA certificate bundle to verify the proxy server. It will be ignored if only HTTPProxy is set; And it is required when HTTPSProxy is set and self signed CA certificate is used by the proxy server.", "format": "byte", "type": "string" }, "httpProxy": { "description": "HTTPProxy is the URL of the proxy for HTTP requests", "type": "string" }, "httpsProxy": { "description": "HTTPSProxy is the URL of the proxy for HTTPS requests HTTPSProxy will be chosen if both HTTPProxy and HTTPSProxy are set.", "type": "string" } }, "type": "object" }, "nodePlacement": { "description": "NodePlacement enables explicit control over the scheduling of the agent components. If the placement is nil, the placement is not specified, it will be omitted. If the placement is an empty object, the placement will match all nodes and tolerate nothing.", "properties": { "nodeSelector": { "additionalProperties": { "type": "string" }, "description": "NodeSelector defines which Nodes the Pods are scheduled on. The default is an empty list.", "type": "object" }, "tolerations": { "description": "Tolerations is attached by pods to tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>. 
The default is an empty list.", "items": { "description": "The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.", "properties": { "effect": { "description": "Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.", "type": "string" }, "key": { "description": "Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.", "type": "string" }, "operator": { "description": "Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.", "type": "string" }, "tolerationSeconds": { "description": "TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.", "format": "int64", "type": "integer" }, "value": { "description": "Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.", "type": "string" } }, "type": "object" }, "type": "array" } }, "type": "object" }, "pullSecret": { "description": "PullSecret is the name of image pull secret.", "properties": { "apiVersion": { "description": "API version of the referent.", "type": "string" }, "fieldPath": { "description": "If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.", "type": "string" }, "kind": { "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" }, "namespace": { "description": "Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "type": "string" }, "resourceVersion": { "description": "Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", "type": "string" }, "uid": { "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids", "type": "string" } }, "type": "object" }, "registries": { "description": "Registries includes the mirror and source registries. 
The source registry will be replaced by the Mirror.", "items": { "properties": { "mirror": { "description": "Mirror is the mirrored registry of the Source. Will be ignored if Mirror is empty.", "type": "string" }, "source": { "description": "Source is the source registry. All image registries will be replaced by Mirror if Source is empty.", "type": "string" } }, "required": [ "mirror" ], "type": "object" }, "type": "array" } }, "type": "object" }, "status": { "description": "Status defines the observed state of KlusterletConfig", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.22.2.3. Query a single KlusterletConfig 1.22.2.3.1. Description Query a single KlusterletConfig for more details. 1.22.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path klusterletconfig_name required Name of the KlusterletConfig that you want to query. string 1.22.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.3.4. Tags config.open-cluster-management.io 1.22.2.4. Delete a KlusterletConfig 1.22.2.4.1. Description Delete a single klusterletconfig. 1.22.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path klusterletconfig_name required Name of the KlusterletConfig that you want to delete. string 1.22.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.4.4. Tags config.open-cluster-management.io 1.22.3. Definitions 1.22.3.1. KlusterletConfig Name Description Schema apiVersion required Versioned schema of the KlusterletConfig. string kind required String value that represents the REST resource. string metadata required Metadata of the KlusterletConfig. object spec required Specification of the KlusterletConfig. spec spec Name Description Schema registries optional Includes the mirror and source registries. The source registry is replaced by the mirror. registry pullSecret optional The name of image pull secret. object nodePlacement required Enables scheduling control of add-on agents on the managed cluster. nodePlacement hubKubeAPIServerProxyConfig required Contains proxy settings for the connections between the klusterlet or add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy setting is available. kubeAPIServerProxyConfig nodePlacement Name Description Schema nodeSelector optional Define which nodes the pods are scheduled to run on. When the nodeSelector is empty, the nodeSelector selects all nodes. map[string]string tolerations optional Applied to pods and used to schedule pods to any taint that matches the <key,value,effect> toleration using the matching operator ( <operator> ). []corev1.Toleration kubeAPIServerProxyConfig Name Description Schema caBundle optional A CA certificate bundle to verify the proxy server. The bundle is ignored if only HTTPProxy is set. 
The bundle is required when HTTPSProxy is set and a self signed CA certificate is used by the proxy server. map[string]string httpProxy optional The URL of the proxy for HTTP requests map[string]string httpsProxy optional The URL of the proxy for HTTPS requests. HTTPSProxy is chosen if both HTTPProxy and HTTPSProxy are set. map[string]string 1.23. Policy compliance history (Technology Preview) 1.23.1. Overview The policy compliance history API is an optional technical preview feature if you want long-term storage of Red Hat Advanced Cluster Management for Kubernetes policy compliance events in a queryable format. You can use the API to get additional details such as the spec field to audit and troubleshoot your policy, and get compliance events when a policy is disabled or removed from a cluster. The policy compliance history API can also generate a comma-separated values (CSV) spreadsheet of policy compliance events to help you with auditing and troubleshooting. 1.23.1.1. Version information Version : 2.11.0 1.23.2. API Endpoints 1.23.2.1. Listing policy compliance events /api/v1/compliance-events This lists all policy compliance events that you have access to by default. The response format is as follows and is sorted by event.timestamp in descending order by default: { "data": [ { "id": 2, "cluster": { "name": "cluster1", "cluster_id": "215ce184-8dee-4cab-b99b-1f8f29dff611" }, "parent_policy": { "id": 3, "name": "configure-custom-app", "namespace": "policies", "catageories": ["CM Configuration Management"], "controls": ["CM-2 Baseline Configuration"], "standards": ["NIST SP 800-53"] }, "policy": { "apiGroup": "policy.open-cluster-management.io", "id": 2, "kind": "ConfigurationPolicy", "name": "configure-custom-app", "namespace": "", // Only shown with `?include_spec` "spec": {} }, "event": { "compliance": "NonCompliant", "message": "configmaps [app-data] not found in namespace default", "timestamp": "2023-07-19T18:25:43.511Z", "metadata": {} } }, { "id": 1, "cluster": { "name": "cluster2", "cluster_id": "415ce234-8dee-4cab-b99b-1f8f29dff461" }, "parent_policy": { "id": 3, "name": "configure-custom-app", "namespace": "policies", "catageories": ["CM Configuration Management"], "controls": ["CM-2 Baseline Configuration"], "standards": ["NIST SP 800-53"] }, "policy": { "apiGroup": "policy.open-cluster-management.io", "id": 4, "kind": "ConfigurationPolicy", "name": "configure-custom-app", "namespace": "", // Only shown with `?include_spec` "spec": {} }, "event": { "compliance": "Compliant", "message": "configmaps [app-data] found as specified in namespace default", "timestamp": "2023-07-19T18:25:41.523Z", "metadata": {} } } ], "metadata": { "page": 1, "pages": 7, "per_page": 20, "total": 123 } } The following optional query parameters are accepted. Notice that those without descriptions just filter on the field it references. The parameter value null represents no value. Additionally, multiple values can be specified with commas. For example, ?cluster.name=cluster1,cluster2 for "or" filtering. Commas can be escaped with \ , if necessary. Table 1.1. Table of query parameters Query argument Description cluster.cluster_id cluster.name direction The direction to sort by. This defaults to desc , which represents descending order. The supported values are asc and desc . event.compliance event.message_includes A filter for compliance messages that include the input string. Only a single value is supported. event.message_like A SQL LIKE filter for compliance messages. 
The percent sign ( % ) represents a wildcard of zero or more characters. The underscore sign ( _ ) represents a wildcard of a single character. For example %configmaps [%my-configmap%]% matches any configuration policy compliance message that refers to the config map my-configmap . event.reported_by event.timestamp event.timestamp_after An RFC 3339 timestamp to indicate only compliance events after this time should be shown. For example, 2024-02-28T16:32:57Z . event.timestamp_before An RFC 3339 timestamp to indicate only compliance events before this time should be shown. For example, 2024-02-28T16:32:57Z . id include_spec A flag to include the spec field of the policy in the return value. This is not set by default. page The page number in the query. This defaults to 1 . parent_policy.categories parent_policy.controls parent_policy.id parent_policy.name parent_policy.namespace parent_policy.standards per_page The number of compliance events returned per page. This defaults to 20 and cannot be larger than 100 . policy.apiGroup policy.id policy.kind policy.name policy.namespace policy.severity sort The field to sort by. This defaults to event.timestamp . All fields except policy.spec and event.metadata are sortable by using dot notation. To specify multiple sort options, use commas such as ?sort=policy.name,policy.namespace . 1.23.2.2. Selecting a single policy compliance event /api/v1/compliance-events/<id> You can select a single policy compliance event by specifying its database ID. For example, /api/v1/compliance-events/1 selects the compliance event with the ID of 1. The format of the return value is the following JSON: { "id": 1, "cluster": { "name": "cluster2", "cluster_id": "415ce234-8dee-4cab-b99b-1f8f29dff461" }, "parent_policy": { "id": 2, "name": "etcd-encryption", "namespace": "policies", "catageories": ["CM Configuration Management"], "controls": ["CM-2 Baseline Configuration"], "standards": ["NIST SP 800-53"] }, "policy": { "apiGroup": "policy.open-cluster-management.io", "id": 4, "kind": "ConfigurationPolicy", "name": "etcd-encryption", "namespace": "", "spec": {} }, "event": { "compliance": "Compliant", "message": "configmaps [app-data] found as specified in namespace default", "timestamp": "2023-07-19T18:25:41.523Z", "metadata": {} } } 1.23.2.3. Generating a spreadsheet /api/v1/reports/compliance-events You can generate a comma separated value (CSV) spreadsheet of compliance events for auditing and troubleshooting. It outputs the same and accepts the same query arguments as the /api/v1/compliance-events API endpoint. By default there is no per_page limitation set and there is no maximum for the per_page query argument. All the CSV headers are the same as the /api/v1/compliance-events API endpoint with underscores separating JSON objects. For example, the event timestamp has a header of event_timestamp . 1.23.3. Authentication and Authorization The policy compliance history API utilizes the OpenShift instance used by the Red Hat Advanced Cluster Management hub cluster for authentication and authorization. You must provide your OpenShift token in the Authorization header of the HTTPS request. To find your token, run the following command: oc whoami --show-token 1.23.3.1. Viewing compliance events To view the compliance events for a managed cluster, you need access to complete the get verb for the ManagedCluster object on the Red Hat Advanced Cluster Management hub cluster. 
For example, to view the compliance events of the local-cluster cluster, you might use the open-cluster-management:view:local-cluster ClusterRole or create your own resource, as in the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-cluster-view
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  resourceNames:
  - local-cluster
  verbs:
  - get

To verify your access to a particular managed cluster, use the oc auth can-i command. For example, to check whether you have access to the local-cluster managed cluster, run an oc auth can-i query for the get verb on that ManagedCluster resource; a sketch of this check is shown after this section.

1.23.3.2. Recording a compliance event

Users or service accounts with patch verb access to the policies.policy.open-cluster-management.io/status resource in the corresponding managed cluster namespace can record policy compliance events. The governance-policy-framework pod on managed clusters uses the open-cluster-management-compliance-history-api-recorder service account in the corresponding managed cluster namespace on the Red Hat Advanced Cluster Management hub cluster to record compliance events. Each service account has the open-cluster-management:compliance-history-api-recorder ClusterRole bound to the managed cluster namespace. Restrict user and service account patch verb access to the policy status to ensure the trustworthiness of the data stored in the policy compliance history API.
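The access check referenced at the end of 1.23.3.1 is not reproduced in this extract. The following is a minimal sketch of what it typically looks like, assuming the local-cluster example above; the exact resource spelling is an assumption rather than something taken from this document:

# Check whether the current user can run the get verb on the local-cluster ManagedCluster,
# which is what viewing its compliance events requires (section 1.23.3.1).
oc auth can-i get managedclusters.cluster.open-cluster-management.io/local-cluster

The command prints yes when a bound role such as the local-cluster-view ClusterRole grants access, and no otherwise.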
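To make the endpoint descriptions in 1.23.2 concrete, the following is a minimal sketch of querying the compliance history API with curl. The route host stored in COMPLIANCE_API_HOST is an assumption used for illustration only; the endpoint paths, query parameters, and Bearer-token authentication are the ones documented in this section:

# Retrieve the OpenShift token, as documented in section 1.23.3.
TOKEN=$(oc whoami --show-token)

# Assumed hostname of the exposed compliance history API; replace with your own route.
COMPLIANCE_API_HOST=compliance-history-api.apps.example.com

# List NonCompliant events for cluster1, newest first, 10 per page (section 1.23.2.1).
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "https://${COMPLIANCE_API_HOST}/api/v1/compliance-events?cluster.name=cluster1&event.compliance=NonCompliant&per_page=10"

# Fetch a single compliance event by its database ID (section 1.23.2.2).
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "https://${COMPLIANCE_API_HOST}/api/v1/compliance-events/1"

# Generate a CSV report with the same cluster filter (section 1.23.2.3).
curl -k -o compliance-events.csv -H "Authorization: Bearer ${TOKEN}" \
  "https://${COMPLIANCE_API_HOST}/api/v1/reports/compliance-events?cluster.name=cluster1"

The -k flag skips TLS verification and is only suitable for a quick test; in production, pass the certificate authority that signs the route instead.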
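Returning to the ManagedClusterSet create request described in 1.21.2.2, the same operation can be exercised directly against the hub cluster Kubernetes API server. This sketch assumes direct access to the API server reported by oc whoami --show-server, using the standard /apis/<group>/<version> path layout rather than the /kubernetes/apis base path of this reference; the request body is the documented example:

# Authenticate with the hub cluster and locate its API server.
TOKEN=$(oc whoami --show-token)
APISERVER=$(oc whoami --show-server)

# POST /cluster.open-cluster-management.io/v1beta2/managedclustersets (section 1.21.2.2).
curl -k -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "apiVersion": "cluster.open-cluster-management.io/v1beta2",
        "kind": "ManagedClusterSet",
        "metadata": { "name": "example-clusterset" },
        "spec": {}
      }' \
  "${APISERVER}/apis/cluster.open-cluster-management.io/v1beta2/managedclustersets"

A successful request returns the created ManagedClusterSet object; querying it back afterwards corresponds to the GET path in 1.21.2.3.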
"GET /cluster.open-cluster-management.io/v1/managedclusters",
"POST /cluster.open-cluster-management.io/v1/managedclusters",
"{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1\", \"kind\" : \"ManagedCluster\", \"metadata\" : { \"labels\" : { \"vendor\" : \"OpenShift\" }, \"name\" : \"cluster1\" }, \"spec\": { \"hubAcceptsClient\": true, \"managedClusterClientConfigs\": [ { \"caBundle\": \"test\", \"url\": \"https://test.com\" } ] }, \"status\" : { } }",
"GET /cluster.open-cluster-management.io/v1/managedclusters/{cluster_name}",
"DELETE /cluster.open-cluster-management.io/v1/managedclusters/{cluster_name}",
"DELETE /hive.openshift.io/v1/{cluster_name}/clusterdeployments/{cluster_name}",
"\"^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD\"",
"GET /cluster.open-cluster-management.io/v1beta2/managedclustersets",
"POST /cluster.open-cluster-management.io/v1beta2/managedclustersets",
"{ \"apiVersion\": \"cluster.open-cluster-management.io/v1beta2\", \"kind\": \"ManagedClusterSet\", \"metadata\": { \"name\": \"clusterset1\" }, \"spec\": { \"clusterSelector\": { \"selectorType\": \"ExclusiveClusterSetLabel\" } }, \"status\": {} }",
"GET /cluster.open-cluster-management.io/v1beta2/managedclustersets/{clusterset_name}",
"DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersets/{clusterset_name}",
"GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings",
"POST /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings",
"{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta2\", \"kind\" : \"ManagedClusterSetBinding\", \"metadata\" : { \"name\" : \"clusterset1\", \"namespace\" : \"ns1\" }, \"spec\": { \"clusterSet\": \"clusterset1\" }, \"status\" : { } }",
"GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings/{clustersetbinding_name}",
"DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersetbindings/{clustersetbinding_name}",
"GET /managedclusters.clusterview.open-cluster-management.io",
"LIST /managedclusters.clusterview.open-cluster-management.io",
"{ \"apiVersion\" : \"clusterview.open-cluster-management.io/v1alpha1\", \"kind\" : \"ClusterView\", \"metadata\" : { \"name\" : \"<user_ID>\" }, \"spec\": { }, \"status\" : { } }",
"WATCH /managedclusters.clusterview.open-cluster-management.io",
"GET /managedclustersets.clusterview.open-cluster-management.io",
"LIST /managedclustersets.clusterview.open-cluster-management.io",
"WATCH /managedclustersets.clusterview.open-cluster-management.io",
"POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels",
"{ \"apiVersion\": \"apps.open-cluster-management.io/v1\", \"kind\": \"Channel\", \"metadata\": { \"name\": \"sample-channel\", \"namespace\": \"default\" }, \"spec\": { \"configMapRef\": { \"kind\": \"configmap\", \"name\": \"bookinfo-resource-filter-configmap\" }, \"pathname\": \"https://charts.helm.sh/stable\", \"type\": \"HelmRepo\" } }",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels/{channel_name}",
"DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels/{channel_name}",
"POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions",
"{ \"apiVersion\" : \"apps.open-cluster-management.io/v1\", \"kind\" : \"Subscription\", \"metadata\" : { \"name\" : \"sample_subscription\", \"namespace\" : \"default\", \"labels\" : { \"app\" : \"sample_subscription-app\" }, \"annotations\" : { \"apps.open-cluster-management.io/git-path\" : \"apps/sample/\", \"apps.open-cluster-management.io/git-branch\" : \"sample_branch\" } }, \"spec\" : { \"channel\" : \"channel_namespace/sample_channel\", \"packageOverrides\" : [ { \"packageName\" : \"my-sample-application\", \"packageAlias\" : \"the-sample-app\", \"packageOverrides\" : [ { \"path\" : \"spec\", \"value\" : { \"persistence\" : { \"enabled\" : false, \"useDynamicProvisioning\" : false }, \"license\" : \"accept\", \"tls\" : { \"hostname\" : \"my-mcm-cluster.icp\" }, \"sso\" : { \"registrationImage\" : { \"pullSecret\" : \"hub-repo-docker-secret\" } } } } ] } ], \"placement\" : { \"placementRef\" : { \"kind\" : \"PlacementRule\", \"name\" : \"demo-clusters\" } } } }",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions/{subscription_name}",
"DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions/{subscription_name}",
"POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules",
"{ \"apiVersion\" : \"apps.open-cluster-management.io/v1\", \"kind\" : \"PlacementRule\", \"metadata\" : { \"name\" : \"towhichcluster\", \"namespace\" : \"ns-sub-1\" }, \"spec\" : { \"clusterConditions\" : [ { \"type\": \"ManagedClusterConditionAvailable\", \"status\": \"True\" } ], \"clusterSelector\" : { } } }",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules/{placementrule_name}",
"DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules/{placementrule_name}",
"POST /app.k8s.io/v1beta1/namespaces/{namespace}/applications",
"{ \"apiVersion\" : \"app.k8s.io/v1beta1\", \"kind\" : \"Application\", \"metadata\" : { \"labels\" : { \"app\" : \"nginx-app-details\" }, \"name\" : \"nginx-app-3\", \"namespace\" : \"ns-sub-1\" }, \"spec\" : { \"componentKinds\" : [ { \"group\" : \"apps.open-cluster-management.io\", \"kind\" : \"Subscription\" } ] }, \"selector\" : { \"matchLabels\" : { \"app\" : \"nginx-app-details\" } }, \"status\" : { } }",
"GET /app.k8s.io/v1beta1/namespaces/{namespace}/applications",
"GET /app.k8s.io/v1beta1/namespaces/{namespace}/applications/{application_name}",
"DELETE /app.k8s.io/v1beta1/namespaces/{namespace}/applications/{application_name}",
"POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases",
"{ \"apiVersion\" : \"apps.open-cluster-management.io/v1\", \"kind\" : \"HelmRelease\", \"metadata\" : { \"name\" : \"nginx-ingress\", \"namespace\" : \"default\" }, \"repo\" : { \"chartName\" : \"nginx-ingress\", \"source\" : { \"helmRepo\" : { \"urls\" : [ \"https://kubernetes-charts.storage.googleapis.com/nginx-ingress-1.26.0.tgz\" ] }, \"type\" : \"helmrepo\" }, \"version\" : \"1.26.0\" }, \"spec\" : { \"defaultBackend\" : { \"replicaCount\" : 3 } } }",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases",
"GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases/{helmrelease_name}",
"DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases/{helmrelease_name}",
"POST /policy.open-cluster-management.io/v1/v1alpha1/namespaces/{namespace}/policies/{policy_name}",
"{ \"apiVersion\": \"policy.open-cluster-management.io/v1\", \"kind\": \"Policy\", \"metadata\": { \"name\": \"test-policy-swagger\", \"description\": \"Example body for Policy API Swagger docs\" }, \"spec\": { \"remediationAction\": \"enforce\", \"namespaces\": { \"include\": [ \"default\" ], \"exclude\": [ \"kube*\" ] }, \"policy-templates\": { \"kind\": \"ConfigurationPolicy\", \"apiVersion\": \"policy.open-cluster-management.io/v1\", \"complianceType\": \"musthave\", \"metadataComplianceType\": \"musthave\", \"metadata\": { \"namespace\": null, \"name\": \"test-role\" }, \"selector\": { \"matchLabels\": { \"cloud\": \"IBM\" } }, \"spec\" : { \"object-templates\": { \"complianceType\": \"musthave\", \"metadataComplianceType\": \"musthave\", \"objectDefinition\": { \"apiVersion\": \"rbac.authorization.k8s.io/v1\", \"kind\": \"Role\", \"metadata\": { \"name\": \"role-policy\", }, \"rules\": [ { \"apiGroups\": [ \"extensions\", \"apps\" ], \"resources\": [ \"deployments\" ], \"verbs\": [ \"get\", \"list\", \"watch\", \"delete\" ] }, { \"apiGroups\": [ \"core\" ], \"resources\": [ \"pods\" ], \"verbs\": [ \"create\", \"update\", \"patch\" ] }, { \"apiGroups\": [ \"core\" ], \"resources\": [ \"secrets\" ], \"verbs\": [ \"get\", \"watch\", \"list\", \"create\", \"delete\", \"update\", \"patch\" ], }, ], }, }, }, },",
"GET /policy.open-cluster-management.io/v1/namespaces/{namespace}/policies/{policy_name}",
"GET /policy.open-cluster-management.io/v1/namespaces/{namespace}/policies/{policy_name}",
"DELETE /policy.open-cluster-management.io/v1/namespaces/{namespace}/policies/{policy_name}",
"POST /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities",
"{ \"apiVersion\": \"observability.open-cluster-management.io/v1beta2\", \"kind\": \"MultiClusterObservability\", \"metadata\": { \"name\": \"example\" }, \"spec\": { \"observabilityAddonSpec\": {} \"storageConfig\": { \"metricObjectStorage\": { \"name\": \"thanos-object-storage\", \"key\": \"thanos.yaml\" \"writeStorage\": { - \"key\": \" \", \"name\" : \" \" - \"key\": \" \", \"name\" : \" \" } } } }",
"GET /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities",
"GET /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities/{multiclusterobservability_name}",
"DELETE /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities/{multiclusterobservability_name}",
"create route passthrough search-api --service=search-search-api -n open-cluster-management",
"input SearchFilter { property: String! values: [String]! } input SearchInput { keywords: [String] filters: [SearchFilter] limit: Int relatedKinds: [String] } type SearchResult { count: Int items: [Map] related: [SearchRelatedResult] } type SearchRelatedResult { kind: String! count: Int items: [Map] }",
"{ \"query\": \"type SearchResult {count: Intitems: [Map]related: [SearchRelatedResult]} type SearchRelatedResult {kind: String!count: Intitems: [Map]}\", \"variables\": { \"input\": [ { \"keywords\": [], \"filters\": [ { \"property\": \"kind\", \"values\": [ \"Deployment\" ] } ], \"limit\": 10 } ] } }",
"type Query { search(input: [SearchInput]): [SearchResult] searchComplete(property: String!, query: SearchInput, limit: Int): [String] searchSchema: Map messages: [Message] }",
"query mySearch(USDinput: [SearchInput]) { search(input: USDinput) { items } }",
"{\"input\":[ { \"keywords\":[], \"filters\":[ {\"property\":\"kind\",\"values\":[\"Deployment\"]}], \"limit\":10 } ]}",
"query mySearch(USDinput: [SearchInput]) { search(input: USDinput) { items } }",
"{\"input\":[ { \"keywords\":[], \"filters\":[ {\"property\":\"kind\",\"values\":[\"Pod\"]}], \"limit\":10 } ]}",
"POST /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/mch",
"{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"name\": \"multiclusterhubs.operator.open-cluster-management.io\" }, \"spec\": { \"group\": \"operator.open-cluster-management.io\", \"names\": { \"kind\": \"MultiClusterHub\", \"listKind\": \"MultiClusterHubList\", \"plural\": \"multiclusterhubs\", \"shortNames\": [ \"mch\" ], \"singular\": \"multiclusterhub\" }, \"scope\": \"Namespaced\", \"versions\": [ { \"additionalPrinterColumns\": [ { \"description\": \"The overall status of the multicluster hub.\", \"jsonPath\": \".status.phase\", \"name\": \"Status\", \"type\": \"string\" }, { \"jsonPath\": \".metadata.creationTimestamp\", \"name\": \"Age\", \"type\": \"date\" } ], \"name\": \"v1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"MultiClusterHub defines the configuration for an instance of the multiCluster hub, a central point for managing multiple Kubernetes-based clusters. The deployment of multicluster hub components is determined based on the configuration that is defined in this resource.\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. The value is in CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"MultiClusterHubSpec defines the desired state of MultiClusterHub.\", \"properties\": { \"availabilityConfig\": { \"description\": \"Specifies deployment replication for improved availability. Options are: Basic and High (default).\", \"type\": \"string\" }, \"customCAConfigmap\": { \"description\": \"Provide the customized OpenShift default ingress CA certificate to Red Hat Advanced Cluster Management.\", } \"type\": \"string\" }, \"disableHubSelfManagement\": { \"description\": \"Disable automatic import of the hub cluster as a managed cluster.\", \"type\": \"boolean\" }, \"disableUpdateClusterImageSets\": { \"description\": \"Disable automatic update of ClusterImageSets.\", \"type\": \"boolean\" }, \"hive\": { \"description\": \"(Deprecated) Overrides for the default HiveConfig specification.\", \"properties\": { \"additionalCertificateAuthorities\": { \"description\": \"(Deprecated) AdditionalCertificateAuthorities is a list of references to secrets in the 'hive' namespace that contain an additional Certificate Authority to use when communicating with target clusters. These certificate authorities are used in addition to any self-signed CA generated by each cluster on installation.\", \"items\": { \"description\": \"LocalObjectReference contains the information to let you locate the referenced object inside the same namespace.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" }, \"backup\": { \"description\": \"(Deprecated) Backup specifies configuration for backup integration. If absent, backup integration is disabled.\", \"properties\": { \"minBackupPeriodSeconds\": { \"description\": \"(Deprecated) MinBackupPeriodSeconds specifies that a minimum of MinBackupPeriodSeconds occurs in between each backup. This is used to rate limit backups. This potentially batches together multiple changes into one backup. No backups are lost for changes that happen during the interval that is queued up, and results in a backup once the interval has been completed.\", \"type\": \"integer\" }, \"velero\": { \"description\": \"(Deprecated) Velero specifies configuration for the Velero backup integration.\", \"properties\": { \"enabled\": { \"description\": \"(Deprecated) Enabled dictates if the Velero backup integration is enabled. If not specified, the default is disabled.\", \"type\": \"boolean\" } }, \"type\": \"object\" } }, \"type\": \"object\" }, \"externalDNS\": { \"description\": \"(Deprecated) ExternalDNS specifies configuration for external-dns if it is to be deployed by Hive. If absent, external-dns is not deployed.\", \"properties\": { \"aws\": { \"description\": \"(Deprecated) AWS contains AWS-specific settings for external DNS.\", \"properties\": { \"credentials\": { \"description\": \"(Deprecated) Credentials reference a secret that is used to authenticate with AWS Route53. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have AWS keys named 'aws_access_key_id' and 'aws_secret_access_key'.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" }, \"gcp\": { \"description\": \"(Deprecated) GCP contains Google Cloud Platform specific settings for external DNS.\", \"properties\": { \"credentials\": { \"description\": \"(Deprecated) Credentials reference a secret that is used to authenticate with GCP DNS. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have a key names 'osServiceAccount.json'. The credentials must specify the project to use.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" } }, \"type\": \"object\" }, \"failedProvisionConfig\": { \"description\": \"(Deprecated) FailedProvisionConfig is used to configure settings related to handling provision failures.\", \"properties\": { \"skipGatherLogs\": { \"description\": \"(Deprecated) SkipGatherLogs disables functionality that attempts to gather full logs from the cluster if an installation fails for any reason. The logs are stored in a persistent volume for up to seven days.\", \"type\": \"boolean\" } }, \"type\": \"object\" }, \"globalPullSecret\": { \"description\": \"(Deprecated) GlobalPullSecret is used to specify a pull secret that is used globally by all of the cluster deployments. 
For each cluster deployment, the contents of GlobalPullSecret are merged with the specific pull secret for a cluster deployment(if specified), with precedence given to the contents of the pull secret for the cluster deployment.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" }, \"maintenanceMode\": { \"description\": \"(Deprecated) MaintenanceMode can be set to true to disable the Hive controllers in situations where you need to ensure nothing is running that adds or act upon finalizers on Hive types. This should rarely be needed. Sets replicas to zero for the 'hive-controllers' deployment to accomplish this.\", \"type\": \"boolean\" } }, \"required\": [ \"failedProvisionConfig\" ], \"type\": \"object\" }, \"imagePullSecret\": { \"description\": \"Override pull secret for accessing MultiClusterHub operand and endpoint images.\", \"type\": \"string\" }, \"ingress\": { \"description\": \"Configuration options for ingress management.\", \"properties\": { \"sslCiphers\": { \"description\": \"List of SSL ciphers enabled for management ingress. Defaults to full list of supported ciphers.\", \"items\": { \"type\": \"string\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"nodeSelector\": { \"additionalProperties\": { \"type\": \"string\" }, \"description\": \"Set the node selectors..\", \"type\": \"object\" }, \"overrides\": { \"description\": \"Developer overrides.\", \"properties\": { \"imagePullPolicy\": { \"description\": \"Pull policy of the multicluster hub images.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"separateCertificateManagement\": { \"description\": \"(Deprecated) Install cert-manager into its own namespace.\", \"type\": \"boolean\" } }, \"type\": \"object\" }, \"status\": { \"description\": \"MulticlusterHubStatus defines the observed state of MultiClusterHub.\", \"properties\": { \"components\": { \"additionalProperties\": { \"description\": \"StatusCondition contains condition information.\", \"properties\": { \"lastTransitionTime\": { \"description\": \"LastTransitionTime is the last time the condition changed from one status to another.\", \"format\": \"date-time\", \"type\": \"string\" }, \"message\": { \"description\": \"Message is a human-readable message indicating\\ndetails about the last status change.\", \"type\": \"string\" }, \"reason\": { \"description\": \"Reason is a (brief) reason for the last status change of the condition.\", \"type\": \"string\" }, \"status\": { \"description\": \"Status is the status of the condition. 
One of True, False, Unknown.\", \"type\": \"string\" }, \"type\": { \"description\": \"Type is the type of the cluster condition.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"description\": \"Components []ComponentCondition `json:\\\"manifests,omitempty\\\"`\", \"type\": \"object\" }, \"conditions\": { \"description\": \"Conditions contain the different condition statuses for the MultiClusterHub.\", \"items\": { \"description\": \"StatusCondition contains condition information.\", \"properties\": { \"lastTransitionTime\": { \"description\": \"LastTransitionTime is the last time the condition changed from one status to another.\", \"format\": \"date-time\", \"type\": \"string\" }, \"lastUpdateTime\": { \"description\": \"The last time this condition was updated.\", \"format\": \"date-time\", \"type\": \"string\" }, \"message\": { \"description\": \"Message is a human-readable message indicating details about the last status change.\", \"type\": \"string\" }, \"reason\": { \"description\": \"Reason is a (brief) reason for the last status change of the condition.\", \"type\": \"string\" }, \"status\": { \"description\": \"Status is the status of the condition. One of True, False, Unknown.\", \"type\": \"string\" }, \"type\": { \"description\": \"Type is the type of the cluster condition.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" }, \"currentVersion\": { \"description\": \"CurrentVersion indicates the current version..\", \"type\": \"string\" }, \"desiredVersion\": { \"description\": \"DesiredVersion indicates the desired version.\", \"type\": \"string\" }, \"phase\": { \"description\": \"Represents the running phase of the MultiClusterHub\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] } }",
"GET /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/operator",
"GET /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/operator/{multiclusterhub_name}",
"DELETE /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/operator/{multiclusterhub_name}",
"GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placement",
"POST /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements",
"{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta1\", \"kind\" : \"Placement\", \"metadata\" : { \"name\" : \"placement1\", \"namespace\": \"ns1\" }, \"spec\": { \"predicates\": [ { \"requiredClusterSelector\": { \"labelSelector\": { \"matchLabels\": { \"vendor\": \"OpenShift\" } } } } ] }, \"status\" : { } }",
"GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements/{placement_name}",
"DELETE /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements/{placement_name}",
"GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions",
"POST /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions",
"{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta1\", \"kind\" : \"PlacementDecision\", \"metadata\" : { \"labels\" : { \"cluster.open-cluster-management.io/placement\" : \"placement1\" }, \"name\" : \"placement1-decision1\", \"namespace\": \"ns1\" }, \"status\" : { } }",
"GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions/{placementdecision_name}",
"DELETE /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions/{placementdecision_name}",
"POST /app.k8s.io/v1/namespaces/{namespace}/discoveryconfigs",
"{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.4.1\", }, \"creationTimestamp\": null, \"name\": \"discoveryconfigs.discovery.open-cluster-management.io\", }, \"spec\": { \"group\": \"discovery.open-cluster-management.io\", \"names\": { \"kind\": \"DiscoveryConfig\", \"listKind\": \"DiscoveryConfigList\", \"plural\": \"discoveryconfigs\", \"singular\": \"discoveryconfig\" }, \"scope\": \"Namespaced\", \"versions\": [ { \"name\": \"v1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"DiscoveryConfig is the Schema for the discoveryconfigs API\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"DiscoveryConfigSpec defines the desired state of DiscoveryConfig\", \"properties\": { \"credential\": { \"description\": \"Credential is the secret containing credentials to connect to the OCM api on behalf of a user\", \"type\": \"string\" }, \"filters\": { \"description\": \"Sets restrictions on what kind of clusters to discover\", \"properties\": { \"lastActive\": { \"description\": \"LastActive is the last active in days of clusters to discover, determined by activity timestamp\", \"type\": \"integer\" }, \"openShiftVersions\": { \"description\": \"OpenShiftVersions is the list of release versions of OpenShift of the form \\\"<Major>.<Minor>\\\"\", \"items\": { \"description\": \"Semver represents a partial semver string with the major and minor version in the form \\\"<Major>.<Minor>\\\". For example: \\\"4.14\\\"\", \"pattern\": \"^(?:0|[1-9]\\\\d*)\\\\.(?:0|[1-9]\\\\d*)USD\", \"type\": \"string\" }, \"type\": \"array\" } }, \"type\": \"object\" } }, \"required\": [ \"credential\" ], \"type\": \"object\" }, \"status\": { \"description\": \"DiscoveryConfigStatus defines the observed state of DiscoveryConfig\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }",
"GET /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator",
"DELETE /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator/{discoveryconfigs_name}",
"POST /app.k8s.io/v1/namespaces/{namespace}/discoveredclusters",
"{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.4.1\", }, \"creationTimestamp\": null, \"name\": \"discoveredclusters.discovery.open-cluster-management.io\", }, \"spec\": { \"group\": \"discovery.open-cluster-management.io\", \"names\": { \"kind\": \"DiscoveredCluster\", \"listKind\": \"DiscoveredClusterList\", \"plural\": \"discoveredclusters\", \"singular\": \"discoveredcluster\" }, \"scope\": \"Namespaced\", \"versions\": [ { \"name\": \"v1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"DiscoveredCluster is the Schema for the discoveredclusters API\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"DiscoveredClusterSpec defines the desired state of DiscoveredCluster\", \"properties\": { \"activityTimestamp\": { \"format\": \"date-time\", \"type\": \"string\" }, \"apiUrl\": { \"type\": \"string\" }, \"cloudProvider\": { \"type\": \"string\" }, \"console\": { \"type\": \"string\" }, \"creationTimestamp\": { \"format\": \"date-time\", \"type\": \"string\" }, \"credential\": { \"description\": \"ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, \\\"must refer only to types A and B\\\" or \\\"UID not honored\\\" or \\\"name must be restricted\\\". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. 
For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .\", \"properties\": { \"apiVersion\": { \"description\": \"API version of the referent.\", \"type\": \"string\" }, \"fieldPath\": { \"description\": \"If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \\\"spec.containers{name}\\\" (where \\\"name\\\" refers to the name of the container that triggered the event) or if no container name is specified \\\"spec.containers[2]\\\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" }, \"namespace\": { \"description\": \"Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/\", \"type\": \"string\" }, \"resourceVersion\": { \"description\": \"Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\", \"type\": \"string\" }, \"uid\": { \"description\": \"UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids\", \"type\": \"string\" } }, \"type\": \"object\" }, \"displayName\": { \"type\": \"string\" }, \"isManagedCluster\": { \"type\": \"boolean\" }, \"name\": { \"type\": \"string\" }, \"openshiftVersion\": { \"type\": \"string\" }, \"status\": { \"type\": \"string\" }, \"type\": { \"type\": \"string\" } }, \"required\": [ \"apiUrl\", \"displayName\", \"isManagedCluster\", \"name\", \"type\" ], \"type\": \"object\" }, \"status\": { \"description\": \"DiscoveredClusterStatus defines the observed state of DiscoveredCluster\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }",
"GET /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator",
"DELETE /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator/{discoveredclusters_name}",
"GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs",
"POST /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs",
"{ \"apiVersion\": \"addon.open-cluster-management.io/v1alpha1\", \"kind\": \"AddOnDeploymentConfig\", \"metadata\": { \"name\": \"deploy-config\", \"namespace\": \"open-cluster-management-hub\" }, \"spec\": { \"nodePlacement\": { \"nodeSelector\": { \"node-dedicated\": \"acm-addon\" }, \"tolerations\": [ { \"effect\": \"NoSchedule\", \"key\": \"node-dedicated\", \"operator\": \"Equal\", \"value\": \"acm-addon\" } ] } } }",
"GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs/{addondeploymentconfig_name}",
"DELETE /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs/{addondeploymentconfig_name}",
"GET /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons",
"POST /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons",
"{ \"apiVersion\": \"addon.open-cluster-management.io/v1alpha1\", \"kind\": \"ClusterManagementAddOn\", \"metadata\": { \"name\": \"helloworld\" }, \"spec\": { \"supportedConfigs\": [ { \"defaultConfig\": { \"name\": \"deploy-config\", \"namespace\": \"open-cluster-management-hub\" }, \"group\": \"addon.open-cluster-management.io\", \"resource\": \"addondeploymentconfigs\" } ] }, \"status\" : { } }",
"GET /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons/{clustermanagementaddon_name}",
"DELETE /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons/{clustermanagementaddon_name}",
"GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons",
"POST /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons",
"{ \"apiVersion\": \"addon.open-cluster-management.io/v1alpha1\", \"kind\": \"ManagedClusterAddOn\", \"metadata\": { \"name\": \"helloworld\", \"namespace\": \"cluster1\" }, \"spec\": { \"configs\": [ { \"group\": \"addon.open-cluster-management.io\", \"name\": \"cluster-deploy-config\", \"namespace\": \"open-cluster-management-hub\", \"resource\": \"addondeploymentconfigs\" } ], \"installNamespace\": \"default\" }, \"status\" : { } }",
"GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons/{managedclusteraddon_name}",
"DELETE /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons/{managedclusteraddon_name}",
"GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersets",
"POST /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersets",
"{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta2\", \"kind\" : \"ManagedClusterSet\", \"metadata\" : { \"name\" : \"example-clusterset\", }, \"spec\": { }, \"status\" : { } }",
"GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersets/{managedclusterset_name}",
"DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersets/{managedclusterset_name}",
"GET /config.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs",
"POST /config.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs",
"{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.7.0\" }, \"creationTimestamp\": null, \"name\": \"klusterletconfigs.config.open-cluster-management.io\" }, \"spec\": { \"group\": \"config.open-cluster-management.io\", \"names\": { \"kind\": \"KlusterletConfig\", \"listKind\": \"KlusterletConfigList\", \"plural\": \"klusterletconfigs\", \"singular\": \"klusterletconfig\" }, \"preserveUnknownFields\": false, \"scope\": \"Cluster\", \"versions\": [ { \"name\": \"v1alpha1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"KlusterletConfig contains the configuration of a klusterlet including the upgrade strategy, config overrides, proxy configurations etc.\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"Spec defines the desired state of KlusterletConfig\", \"properties\": { \"hubKubeAPIServerProxyConfig\": { \"description\": \"HubKubeAPIServerProxyConfig holds proxy settings for connections between klusterlet/add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy settings is available.\", \"properties\": { \"caBundle\": { \"description\": \"CABundle is a CA certificate bundle to verify the proxy server. It will be ignored if only HTTPProxy is set; And it is required when HTTPSProxy is set and self signed CA certificate is used by the proxy server.\", \"format\": \"byte\", \"type\": \"string\" }, \"httpProxy\": { \"description\": \"HTTPProxy is the URL of the proxy for HTTP requests\", \"type\": \"string\" }, \"httpsProxy\": { \"description\": \"HTTPSProxy is the URL of the proxy for HTTPS requests HTTPSProxy will be chosen if both HTTPProxy and HTTPSProxy are set.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"nodePlacement\": { \"description\": \"NodePlacement enables explicit control over the scheduling of the agent components. If the placement is nil, the placement is not specified, it will be omitted. If the placement is an empty object, the placement will match all nodes and tolerate nothing.\", \"properties\": { \"nodeSelector\": { \"additionalProperties\": { \"type\": \"string\" }, \"description\": \"NodeSelector defines which Nodes the Pods are scheduled on. The default is an empty list.\", \"type\": \"object\" }, \"tolerations\": { \"description\": \"Tolerations is attached by pods to tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>. 
The default is an empty list.\", \"items\": { \"description\": \"The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.\", \"properties\": { \"effect\": { \"description\": \"Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\", \"type\": \"string\" }, \"key\": { \"description\": \"Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\", \"type\": \"string\" }, \"operator\": { \"description\": \"Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\", \"type\": \"string\" }, \"tolerationSeconds\": { \"description\": \"TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\", \"format\": \"int64\", \"type\": \"integer\" }, \"value\": { \"description\": \"Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"pullSecret\": { \"description\": \"PullSecret is the name of image pull secret.\", \"properties\": { \"apiVersion\": { \"description\": \"API version of the referent.\", \"type\": \"string\" }, \"fieldPath\": { \"description\": \"If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \\\"spec.containers{name}\\\" (where \\\"name\\\" refers to the name of the container that triggered the event) or if no container name is specified \\\"spec.containers[2]\\\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" }, \"namespace\": { \"description\": \"Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/\", \"type\": \"string\" }, \"resourceVersion\": { \"description\": \"Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\", \"type\": \"string\" }, \"uid\": { \"description\": \"UID of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids\", \"type\": \"string\" } }, \"type\": \"object\" }, \"registries\": { \"description\": \"Registries includes the mirror and source registries. The source registry will be replaced by the Mirror.\", \"items\": { \"properties\": { \"mirror\": { \"description\": \"Mirror is the mirrored registry of the Source. Will be ignored if Mirror is empty.\", \"type\": \"string\" }, \"source\": { \"description\": \"Source is the source registry. All image registries will be replaced by Mirror if Source is empty.\", \"type\": \"string\" } }, \"required\": [ \"mirror\" ], \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"status\": { \"description\": \"Status defines the observed state of KlusterletConfig\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }",
"GET /config.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs/{klusterletconfig_name}",
"DELETE /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs/{klusterletconfig_name}",
"{ \"data\": [ { \"id\": 2, \"cluster\": { \"name\": \"cluster1\", \"cluster_id\": \"215ce184-8dee-4cab-b99b-1f8f29dff611\" }, \"parent_policy\": { \"id\": 3, \"name\": \"configure-custom-app\", \"namespace\": \"policies\", \"catageories\": [\"CM Configuration Management\"], \"controls\": [\"CM-2 Baseline Configuration\"], \"standards\": [\"NIST SP 800-53\"] }, \"policy\": { \"apiGroup\": \"policy.open-cluster-management.io\", \"id\": 2, \"kind\": \"ConfigurationPolicy\", \"name\": \"configure-custom-app\", \"namespace\": \"\", // Only shown with `?include_spec` \"spec\": {} }, \"event\": { \"compliance\": \"NonCompliant\", \"message\": \"configmaps [app-data] not found in namespace default\", \"timestamp\": \"2023-07-19T18:25:43.511Z\", \"metadata\": {} } }, { \"id\": 1, \"cluster\": { \"name\": \"cluster2\", \"cluster_id\": \"415ce234-8dee-4cab-b99b-1f8f29dff461\" }, \"parent_policy\": { \"id\": 3, \"name\": \"configure-custom-app\", \"namespace\": \"policies\", \"catageories\": [\"CM Configuration Management\"], \"controls\": [\"CM-2 Baseline Configuration\"], \"standards\": [\"NIST SP 800-53\"] }, \"policy\": { \"apiGroup\": \"policy.open-cluster-management.io\", \"id\": 4, \"kind\": \"ConfigurationPolicy\", \"name\": \"configure-custom-app\", \"namespace\": \"\", // Only shown with `?include_spec` \"spec\": {} }, \"event\": { \"compliance\": \"Compliant\", \"message\": \"configmaps [app-data] found as specified in namespace default\", \"timestamp\": \"2023-07-19T18:25:41.523Z\", \"metadata\": {} } } ], \"metadata\": { \"page\": 1, \"pages\": 7, \"per_page\": 20, \"total\": 123 } }",
"{ \"id\": 1, \"cluster\": { \"name\": \"cluster2\", \"cluster_id\": \"415ce234-8dee-4cab-b99b-1f8f29dff461\" }, \"parent_policy\": { \"id\": 2, \"name\": \"etcd-encryption\", \"namespace\": \"policies\", \"catageories\": [\"CM Configuration Management\"], \"controls\": [\"CM-2 Baseline Configuration\"], \"standards\": [\"NIST SP 800-53\"] }, \"policy\": { \"apiGroup\": \"policy.open-cluster-management.io\", \"id\": 4, \"kind\": \"ConfigurationPolicy\", \"name\": \"etcd-encryption\", \"namespace\": \"\", \"spec\": {} }, \"event\": { \"compliance\": \"Compliant\", \"message\": \"configmaps [app-data] found as specified in namespace default\", \"timestamp\": \"2023-07-19T18:25:41.523Z\", \"metadata\": {} } }",
"whoami --show-token",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: local-cluster-view rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters resourceNames: - local-cluster verbs: - get",
"auth can-i get managedclusters.cluster.open-cluster-management.io/local-cluster"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/apis/apis |
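A minimal sketch of calling the Placement endpoints listed above from a shell, assuming a hub API endpoint of https://api.example.com:6443, a namespace named ns1, and a local file placement1.json holding the example Placement payload (all three are placeholders, not part of the reference above). Note that on a live Kubernetes API server these group paths are served under the /apis prefix, and the token is obtained with oc whoami --show-token as shown above.
TOKEN=$(oc whoami --show-token)
APISERVER=https://api.example.com:6443   # assumed hub API endpoint
# List Placements in the assumed namespace ns1
curl -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/cluster.open-cluster-management.io/v1beta1/namespaces/ns1/placements"
# Create the example Placement shown above from the assumed local file
curl -X POST -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  --data @placement1.json \
  "${APISERVER}/apis/cluster.open-cluster-management.io/v1beta1/namespaces/ns1/placements"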
Chapter 2. Installing the Self-hosted Engine Deployment Host | Chapter 2. Installing the Self-hosted Engine Deployment Host A self-hosted engine can be deployed from a Red Hat Virtualization Host or a Red Hat Enterprise Linux host . Important If you plan to use bonded interfaces for high availability or VLANs to separate different types of traffic (for example, for storage or management connections), you should configure them on the host before beginning the self-hosted engine deployment. See Networking Recommendations in the Planning and Prerequisites Guide . 2.1. Installing Red Hat Virtualization Hosts Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See http://cockpit-project.org/running.html for the minimum browser requirements. RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default. The host must meet the minimum host requirements . Procedure Download the RHVH ISO image from the Customer Portal: Log in to the Customer Portal at https://access.redhat.com . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and and click Download Now . Create a bootable media device. See Making Media in the Red Hat Enterprise Linux Installation Guide for more information. Start the machine on which you are installing RHVH, booting from the prepared installation media. From the boot menu, select Install RHVH 4.3 and press Enter . Note You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu. Select a language, and click Continue . Select a time zone from the Date & Time screen and click Done . Select a keyboard layout from the Keyboard screen and click Done . Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done . Important Red Hat strongly recommends using the Automatically configure partitioning option. Select a network from the Network & Host Name screen and click Configure... to configure the connection details. Note To use the connection every time the system boots, select the Automatically connect to this network when it is available check box. For more information, see Edit Network Connections in the Red Hat Enterprise Linux 7 Installation Guide . Enter a host name in the Host name field, and click Done . Optionally configure Language Support , Security Policy , and Kdump . See Installing Using Anaconda in the Red Hat Enterprise Linux 7 Installation Guide for more information on each of the sections in the Installation Summary screen. Click Begin Installation . Set a root password and, optionally, create an additional user while RHVH installs. Warning Red Hat strongly recommends not creating untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. 
Click Reboot to complete the installation. Note When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information. The service is enabled by default. 2.1.1. Enabling the Red Hat Virtualization Host Repository Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network , or with Red Hat Satellite 6 . Registering RHVH with the Content Delivery Network Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Navigate to Subscriptions , click Register System , and enter your Customer Portal user name and password. The Red Hat Virtualization Host subscription is automatically attached to the system. Click Terminal . Enable the Red Hat Virtualization Host 7 repository to allow later updates to the Red Hat Virtualization Host: Registering RHVH with Red Hat Satellite 6 Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Click Terminal . Register RHVH with Red Hat Satellite 6: 2.2. Installing Red Hat Enterprise Linux hosts A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 7 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached. For detailed installation instructions, see Performing a standard Red Hat Enterprise Linux installation . The host must meet the minimum host requirements . Important Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation. Important Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM. 2.2.1. Enabling the Red Hat Enterprise Linux host Repositories To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: Use the pool IDs to attach the subscriptions to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER8 hardware: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER9 hardware: Ensure that all packages currently installed are up to date: Reboot the machine. Although the existing storage domains will be migrated from the standalone Manager, you must prepare additional storage for a self-hosted engine storage domain that is dedicated to the Manager virtual machine. | [
"subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms",
"rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # subscription-manager register --org=\" org_id \" # subscription-manager list --available # subscription-manager attach --pool= pool_id # subscription-manager repos --disable='*' --enable=rhel-7-server-rhvh-4-rpms",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= poolid",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms --enable=rhel-7-for-power-le-rpms",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms --enable=rhel-7-for-power-9-rpms",
"yum update"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/installing_the_self-hosted_engine_deployment_host_migrating_to_she |
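Before starting the self-hosted engine deployment, a quick sanity check along the lines of the sketch below can confirm the state described above; the grep pattern reflects only the x86_64 repositories listed in this chapter and is an assumption about how you want to filter the output, and the nodectl step applies to RHVH only.
subscription-manager list --consumed          # confirm the RHEL Server and RHV subscriptions are attached
yum repolist enabled | grep -E 'rhel-7-server-rpms|rhv-4-mgmt-agent|ansible-2.9'
nodectl check                                 # RHVH only: re-run the node health check after the reboot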
Chapter 3. Installing the Cluster Observability Operator | Chapter 3. Installing the Cluster Observability Operator As a cluster administrator, you can install or remove the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. OperatorHub is a user interface that works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 3.1. Installing the Cluster Observability Operator in the web console Install the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Type cluster observability operator in the Filter by keyword box. Click Cluster Observability Operator in the list of results. Read the information about the Operator, and configure the following installation settings: Update channel stable Version 1.0.0 or later Installation mode All namespaces on the cluster (default) Installed Namespace Operator recommended Namespace: openshift-cluster-observability-operator Select Enable Operator recommended cluster monitoring on this Namespace Update approval Automatic Optional: You can change the installation settings to suit your requirements. For example, you can select to subscribe to a different update channel, to install an older released version of the Operator, or to require manual approval for updates to new versions of the Operator. Click Install . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry appears in the list. Additional resources Adding Operators to a cluster 3.2. Uninstalling the Cluster Observability Operator using the web console If you have installed the Cluster Observability Operator (COO) by using OperatorHub, you can uninstall it in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure Go to Operators Installed Operators . Locate the Cluster Observability Operator entry in the list. Click for this entry and select Uninstall Operator . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry no longer appears in the list. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cluster_observability_operator/installing-cluster-observability-operators |
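The web-console verification above can also be done from the CLI; the following sketch assumes the Operator-recommended namespace was accepted during installation, and exact resource names can vary by release.
oc get subscription -n openshift-cluster-observability-operator
oc get csv -n openshift-cluster-observability-operator
oc get pods -n openshift-cluster-observability-operator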
Chapter 12. Using a service account as an OAuth client | Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. 
Static and dynamic annotations can be used at the same time to achieve the desired behavior: | [
"oc sa get-token <service_account_name>",
"serviceaccounts.openshift.io/oauth-redirecturi.<name>",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/using-service-accounts-as-oauth-client |
1.10.3. REDUNDANCY | 1.10.3. REDUNDANCY The REDUNDANCY panel allows you to configure the backup LVS router node and set various heartbeat monitoring options. Figure 1.33. The REDUNDANCY Panel Redundant server public IP The public real IP address for the backup LVS router. Redundant server private IP The backup router's private real IP address. The rest of the panel is for configuring the heartbeat channel, which is used by the backup node to monitor the primary node for failure. Heartbeat Interval (seconds) Sets the number of seconds between heartbeats - the interval at which the backup node checks the functional status of the primary LVS node. Assume dead after (seconds) If the primary LVS node does not respond after this number of seconds, then the backup LVS router node will initiate failover. Heartbeat runs on port Sets the port at which the heartbeat communicates with the primary LVS node. The default is set to 539 if this field is left blank. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-piranha-redun-CSO
5.4.12. Removing Logical Volumes | 5.4.12. Removing Logical Volumes To remove an inactive logical volume, use the lvremove command. If the logical volume is currently mounted, unmount the volume before removing it. In addition, in a clustered environment you must deactivate a logical volume before it can be removed. The following command removes the logical volume /dev/testvg/testlv from the volume group testvg . Note that in this case the logical volume has not been deactivated. You could explicitly deactivate the logical volume before removing it with the lvchange -an command, in which case you would not see the prompt verifying whether you want to remove an active logical volume. | [
"lvremove /dev/testvg/testlv Do you really want to remove active logical volume \"testlv\"? [y/n]: y Logical volume \"testlv\" successfully removed"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/LV_remove |
6.8. Attaching a Red Hat Subscription and Enabling the Certificate System Package Repository | 6.8. Attaching a Red Hat Subscription and Enabling the Certificate System Package Repository Before you can install and update Certificate System, you must enable the corresponding repository: Attach the Red Hat subscriptions to the system: Skip this step if your system is already registered or has a subscription that provides Certificate System attached. Register the system to the Red Hat subscription management service. You can use the --auto-attach option to automatically apply an available subscription for the operating system. List the available subscriptions and note the pool ID providing the Red Hat Certificate System. For example: If you have many subscriptions, the output of the command can be very long. You can optionally redirect the output to a file: Attach the Certificate System subscription to the system using the pool ID from the previous step: Enable the Certificate System repository: Enable the Certificate System module stream: Installing the required packages is described in Chapter 7, Installing and Configuring Certificate System. Note For compliance, only enable Red Hat approved repositories. Only Red Hat approved repositories can be enabled through the subscription-manager utility. | [
"subscription-manager register --auto-attach Username: [email protected] Password: The system has been registered with id: 566629db-a4ec-43e1-aa02-9cbaa6177c3f Installed Product Current Status: Product Name: Red Hat Enterprise Linux Server Status: Subscribed",
"subscription-manager list --available --all Subscription Name: Red Hat Enterprise Linux Developer Suite Provides: Red Hat Certificate System Pool ID: 7aba89677a6a38fc0bba7dac673f7993 Available: 1",
"subscription-manager list --available --all > /root/subscriptions.txt",
"subscription-manager attach --pool= 7aba89677a6a38fc0bba7dac673f7993 Successfully attached a subscription for: Red Hat Enterprise Linux Developer Suite",
"subscription-manager repos --enable certsys-10-for-rhel-8-x86_64-eus-rpms Repository 'certsys-10-for-rhel-8-x86_64-eus-rpms' is enabled for this system.",
"dnf module enable redhat-pki"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/enabling_the_CS_repository |
Assessing RHEL Configuration Issues Using the Red Hat Insights Advisor Service | Assessing RHEL Configuration Issues Using the Red Hat Insights Advisor Service Red Hat Insights 1-latest Assess and monitor the configuration issues impacting your RHEL systems Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service/index |
Chapter 12. Troubleshooting | Chapter 12. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 12.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.18 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 12.2. Migration Toolkit for Containers custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 12.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 12.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 12.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 12.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 12.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 12.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 12.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 12.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a Migmigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster are checked and to return the names of pods that are not in a Running state. 12.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration. 
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 12.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 12.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 12.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 12.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 12.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 12.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 12.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 12.3.4.1.1. 
cam_app_workload_migrations This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry. Table 12.1. cam_app_workload_migrations metric Queryable label name Sample label values Label description status running , idle , failed , completed Status of the MigMigration CR type stage, final Type of the MigMigration CR 12.3.4.1.2. mtc_client_request_count This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 12.2. mtc_client_request_count metric Queryable label name Sample label values Label description cluster https://migcluster-url:443 Cluster that the request was issued against component MigPlan , MigCluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes kind the request was issued for 12.3.4.1.3. mtc_client_request_elapsed This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 12.3. mtc_client_request_elapsed metric Queryable label name Sample label values Label description cluster https://cluster-url.com:443 Cluster that the request was issued against component migplan , migcluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes resource that the request was issued for 12.3.4.1.4. Useful queries The table lists some helpful queries that can be used for monitoring performance. Table 12.4. Useful queries Query Description mtc_client_request_count Number of API requests issued, sorted by request type sum(mtc_client_request_count) Total number of API requests issued mtc_client_request_elapsed API request latency, sorted by request type sum(mtc_client_request_elapsed) Total latency of API requests sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) Average latency of API requests mtc_client_request_elapsed / mtc_client_request_count Average latency of API requests, sorted by request type cam_app_workload_migrations{status="running"} * 100 Count of running migrations, multiplied by 100 for easier viewing alongside request counts 12.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 This command saves the data as the must-gather/must-gather.tar.gz file. You can upload this file to a support case on the Red Hat Customer Portal . 
To collect data for the past 24 hours, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump This operation can take a long time. This command saves the data as the must-gather/metrics/prom_data.tar.gz file. 12.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 12.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. 
For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 12.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. 
MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes 
- events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 12.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 12.4.1. Updating deprecated internal images If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster. If an OpenShift Container Platform 3 image is deprecated in OpenShift Container Platform 4.18, you can manually update the image stream tag by using podman . Prerequisites You must have podman installed. You must be logged in as a user with cluster-admin privileges. If you are using insecure registries, add your registry host values to the [registries.insecure] section of /etc/container/registries.conf to ensure that podman does not encounter a TLS verification error. The internal registries must be exposed on the source and target clusters. Procedure Ensure that the internal registries are exposed on the OpenShift Container Platform 3 and 4 clusters. The OpenShift image registry is exposed by default on OpenShift Container Platform 4. If you are using insecure registries, add your registry host values to the [registries.insecure] section of /etc/container/registries.conf to ensure that podman does not encounter a TLS verification error. Log in to the OpenShift Container Platform 3 registry by running the following command: USD podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port> Log in to the OpenShift Container Platform 4 registry by running the following command: USD podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port> Pull the OpenShift Container Platform 3 image by running the following command: USD podman pull <registry_url>:<port>/openshift/<image> Scan the OpenShift Container Platform 3 image for deprecated namespaces by running the following command: USD oc get bc --all-namespaces --template='range .items "BuildConfig:" .metadata.namespace/.metadata.name => "\t""ImageStream(FROM):" .spec.strategy.sourceStrategy.from.namespace/.spec.strategy.sourceStrategy.from.name "\t""ImageStream(TO):" .spec.output.to.namespace/.spec.output.to.name end' Tag the OpenShift Container Platform 3 image for the OpenShift Container Platform 4 registry by running the following command: USD podman tag <registry_url>:<port>/openshift/<image> \ 1 <registry_url>:<port>/openshift/<image> 2 1 Specify the registry URL and port for the OpenShift Container Platform 3 cluster. 2 Specify the registry URL and port for the OpenShift Container Platform 4 cluster. Push the image to the OpenShift Container Platform 4 registry by running the following command: USD podman push <registry_url>:<port>/openshift/<image> 1 1 Specify the OpenShift Container Platform 4 cluster. Verify that the image has a valid image stream by running the following command: USD oc get imagestream -n openshift | grep <image> Example output NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago 12.4.2. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. 
Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 12.4.3. Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 12.4.3.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 12.4.3.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 12.4.3.3. Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the MTC, certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 12.4.3.4. 
Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 12.4.3.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 12.4.3.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 12.4.3.7. 
Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 12.4.4. Known issues This release has the following known issues: During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. ( BZ#1748440 ) Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) If a large migration fails because Restic times out, you can increase the restic_timeout parameter value (default: 1h ) in the MigrationController custom resource (CR) manifest. If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower. If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody . The migration fails and a permission error is displayed in the Restic pod log. ( BZ#1873641 ) You can resolve this issue by adding supplemental groups for Restic to the MigrationController CR manifest: spec: ... restic_supplemental_groups: - 5555 - 6666 If you perform direct volume migration with nodes that are in different availability zones or availability sets, the migration might fail because the migrated pods cannot access the PVC. ( BZ#1947487 ) 12.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 12.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. 
Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 12.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 12.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. 
You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console | [
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman pull <registry_url>:<port>/openshift/<image>",
"oc get bc --all-namespaces --template='range .items \"BuildConfig:\" .metadata.namespace/.metadata.name => \"\\t\"\"ImageStream(FROM):\" .spec.strategy.sourceStrategy.from.namespace/.spec.strategy.sourceStrategy.from.name \"\\t\"\"ImageStream(TO):\" .spec.output.to.namespace/.spec.output.to.name end'",
"podman tag <registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2",
"podman push <registry_url>:<port>/openshift/<image> 1",
"oc get imagestream -n openshift | grep <image>",
"NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"spec: restic_supplemental_groups: - 5555 - 6666",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migrating_from_version_3_to_4/troubleshooting-3-4 |
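The troubleshooting record above documents oc get migmigration, velero restore describe, and velero restore logs as separate steps. The following bash sketch is a hypothetical convenience wrapper around those exact commands; the script name, argument handling, and section headers are assumptions and are not part of the MTC tooling. It expects the MigMigration name and the Velero restore name that appears in the MigMigration status conditions (for example, ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf).

#!/usr/bin/env bash
# Hypothetical helper that bundles the debugging commands documented above.
# Usage: ./debug-migration.sh <migmigration_name> <velero_restore_name>
set -euo pipefail

MIGMIGRATION="${1:?migmigration name required}"
RESTORE="${2:?velero restore name required}"
NS=openshift-migration

echo "== MigMigration status conditions =="
# Look for Warn conditions such as VeleroFinalRestorePartiallyFailed.
oc get migmigration "${MIGMIGRATION}" -n "${NS}" -o yaml

echo "== Velero restore summary =="
# Summarizes Velero, Cluster, and Namespaces errors and warnings.
oc -n "${NS}" exec deployment/velero -c velero -- ./velero restore describe "${RESTORE}"

echo "== Velero restore logs =="
# Search the output for "error restoring" messages to find the failing resource.
oc -n "${NS}" exec deployment/velero -c velero -- ./velero restore logs "${RESTORE}"

Run against the partially failed example above, the final step would surface the same "the server could not find the requested resource" message shown in the restore logs.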
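The manual rollback procedure above deletes the stage pods and then scales each deployment back to the replica count stored in the migration.openshift.io/preQuiesceReplicas annotation. The bash sketch below automates those two documented steps for a single namespace; the per-deployment loop, the jsonpath expression, and the label-selector form of the delete command are illustrative assumptions rather than official MTC tooling.

#!/usr/bin/env bash
# Hypothetical sketch of the manual rollback steps: remove stage pods,
# then unquiesce the application by restoring premigration replica counts.
set -euo pipefail

NAMESPACE="${1:?namespace required}"

# Delete the stage pods left behind by the failed migration.
oc delete pods -l migration.openshift.io/is-stage-pod -n "${NAMESPACE}"

# Restore each deployment to the replica count recorded before quiescing.
for deploy in $(oc get deployment -n "${NAMESPACE}" -o name); do
  replicas=$(oc get "${deploy}" -n "${NAMESPACE}" \
    -o jsonpath='{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}')
  if [ -n "${replicas}" ]; then
    oc scale "${deploy}" -n "${NAMESPACE}" --replicas="${replicas}"
  fi
done

# Verify that the application pods are running on the source cluster.
oc get pod -n "${NAMESPACE}"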
3.2.2. Systemtap Handler/Body | 3.2.2. Systemtap Handler/Body Consider the following sample script: Example 3.4. helloworld.stp In Example 3.4, "helloworld.stp" , the event begin (that is the start of the session) triggers the handler enclosed in { } , which simply prints hello world followed by a new-line, then exits. Note SystemTap scripts continue to run until the exit() function executes. If the users wants to stop the execution of the script, it can interrupted manually with Ctrl + C . printf ( ) Statements The printf () statement is one of the simplest functions for printing data. printf () can also be used to display data using a wide variety of SystemTap functions in the following format: The format string specifies how arguments should be printed. The format string of Example 3.4, "helloworld.stp" simply instructs SystemTap to print hello world , and contains no format specifiers. You can use the format specifiers %s (for strings) and %d (for numbers) in format strings, depending on your list of arguments. Format strings can have multiple format specifiers, each matching a corresponding argument; multiple arguments are delimited by a comma ( , ). Note Semantically, the SystemTap printf function is very similar to its C language counterpart. The aforementioned syntax and format for SystemTap's printf function is identical to that of the C-style printf . To illustrate this, consider the following probe example: Example 3.5. variables-in-printf-statements.stp Example 3.5, "variables-in-printf-statements.stp" instructs SystemTap to probe all entries to the system call open ; for each event, it prints the current execname() (a string with the executable name) and pid() (the current process ID number), followed by the word open . A snippet of this probe's output would look like: SystemTap Functions SystemTap supports a wide variety of functions that can be used as printf () arguments. Example 3.5, "variables-in-printf-statements.stp" uses the SystemTap functions execname() (name of the process that called a kernel function/performed a system call) and pid() (current process ID). The following is a list of commonly-used SystemTap functions: tid() The ID of the current thread. uid() The ID of the current user. cpu() The current CPU number. gettimeofday_s() The number of seconds since UNIX epoch (January 1, 1970). ctime() Convert number of seconds since UNIX epoch to date. pp() A string describing the probe point currently being handled. thread_indent() This particular function is quite useful, providing you with a way to better organize your print results. The function takes one argument, an indentation delta, which indicates how many spaces to add or remove from a thread's "indentation counter". It then returns a string with some generic trace data along with an appropriate number of indentation spaces. The generic data included in the returned string includes a timestamp (number of microseconds since the first call to thread_indent() by the thread), a process name, and the thread ID. This allows you to identify what functions were called, who called them, and the duration of each function call. If call entries and exits immediately precede each other, it is easy to match them. However, in most cases, after a first function call entry is made several other call entries and exits may be made before the first call exits. The indentation counter helps you match an entry with its corresponding exit by indenting the function call if it is not the exit of the one. 
Consider the following example on the use of thread_indent() : Example 3.6. thread_indent.stp Example 3.6, "thread_indent.stp" prints out the thread_indent() and probe functions at each event in the following format: This sample output contains the following information: The time (in microseconds) since the initial thread_indent() call for the thread (included in the string from thread_indent() ). The process name (and its corresponding ID) that made the function call (included in the string from thread_indent() ). An arrow signifying whether the call was an entry ( -> ) or an exit ( <- ); the indentations help you match specific function call entries with their corresponding exits. The name of the function called by the process. name Identifies the name of a specific system call. This variable can only be used in probes that use the event syscall. system_call . target() Used in conjunction with stap script -x process ID or stap script -c command . If you want to specify a script to take an argument of a process ID or command, use target() as the variable in the script to refer to it. For example: Example 3.7. targetexample.stp When Example 3.7, "targetexample.stp" is run with the argument -x process ID , it watches all system calls (as specified by the event syscall.* ) and prints out the name of all system calls made by the specified process. This has the same effect as specifying if (pid() == process ID ) each time you wish to target a specific process. However, using target() makes it easier for you to re-use the script, giving you the ability to simply pass a process ID as an argument each time you wish to run the script (for example stap targetexample.stp -x process ID ). For more information about supported SystemTap functions, refer to man stapfuncs . | [
"probe begin { printf (\"hello world\\n\") exit () }",
"printf (\" format string \\n\", arguments )",
"probe syscall.open { printf (\"%s(%d) open\\n\", execname(), pid()) }",
"vmware-guestd(2206) open hald(2360) open hald(2360) open hald(2360) open df(3433) open df(3433) open df(3433) open hald(2360) open",
"probe kernel.function(\"*@net/socket.c\") { printf (\"%s -> %s\\n\", thread_indent(1), probefunc()) } probe kernel.function(\"*@net/socket.c\").return { printf (\"%s <- %s\\n\", thread_indent(-1), probefunc()) }",
"0 ftp(7223): -> sys_socketcall 1159 ftp(7223): -> sys_socket 2173 ftp(7223): -> __sock_create 2286 ftp(7223): -> sock_alloc_inode 2737 ftp(7223): <- sock_alloc_inode 3349 ftp(7223): -> sock_alloc 3389 ftp(7223): <- sock_alloc 3417 ftp(7223): <- __sock_create 4117 ftp(7223): -> sock_create 4160 ftp(7223): <- sock_create 4301 ftp(7223): -> sock_map_fd 4644 ftp(7223): -> sock_map_file 4699 ftp(7223): <- sock_map_file 4715 ftp(7223): <- sock_map_fd 4732 ftp(7223): <- sys_socket 4775 ftp(7223): <- sys_socketcall",
"probe syscall.* { if (pid() == target()) printf(\"%s/n\", name) }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/systemtapscript-handler |
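To tie several of the commonly-used functions above together, here is a minimal sketch (not one of the numbered examples in this section; the probe point and the one-minute timer are illustrative choices) that prints a human-readable timestamp, the executable name, the process ID, and the thread ID on every entry to the open system call, then exits after sixty seconds:

probe syscall.open
{
  # ctime() renders the epoch seconds returned by gettimeofday_s() as a date string
  printf ("%s %s(%d) tid=%d open\n", ctime(gettimeofday_s()), execname(), pid(), tid())
}
probe timer.s(60)
{
  exit ()
}

Run it as root with stap (for example, stap timestamps.stp, where the file name is arbitrary), or interrupt it earlier with Ctrl + C.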
Chapter 6. Premigration checklists | Chapter 6. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 6.1. Cluster health checklist ❏ The clusters meet the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ All MTC prerequisites are met. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have verified node health . ❏ The identity provider is working. ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The etcd disk performance of the clusters has been checked with fio . 6.2. Source cluster checklist ❏ You have checked for persistent volumes (PVs) with abnormal configurations stuck in a Terminating state by running the following command: USD oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: USD oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: USD oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: USD oc get csr -A | grep pending -i ❏ The registry uses a recommended storage type . ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. 6.3. Target cluster checklist ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster. ❏ Internal container image dependencies are met. ❏ The target cluster and the replication repository have sufficient storage space. | [
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migration_toolkit_for_containers/premigration-checklists-mtc |
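For convenience, the source cluster checks listed above can be chained in a small shell sketch. This assumes the oc client is already logged in to the source cluster and that jq is installed; it only wraps the commands shown in the checklist:

echo "PVs in a Terminating state:"
oc get pv | grep Terminating
echo "Pods not Running or Completed:"
oc get pods --all-namespaces | egrep -v 'Running | Completed'
echo "Pods with a restart count above 3:"
oc get pods --all-namespaces --field-selector=status.phase=Running \
  -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'
echo "Pending certificate-signing requests:"
oc get csr -A | grep pending -i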
Chapter 7. Triggering and modifying builds | Chapter 7. Triggering and modifying builds The following sections outline how to trigger builds and modify builds using build hooks. 7.1. Build triggers When defining a BuildConfig , you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available: Webhook Image change Configuration change 7.1.1. Webhook triggers Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Dedicated API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks. Currently, OpenShift Dedicated webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored. When the push events are processed, the OpenShift Dedicated control plane host confirms if the branch reference inside the event matches the branch reference in the corresponding BuildConfig . If so, it then checks out the exact commit reference noted in the webhook event on the OpenShift Dedicated build. If they do not match, no build is triggered. Note oc new-app and oc new-build create GitHub and Generic webhook triggers automatically, but any other needed webhook triggers must be added manually. You can manually add triggers by setting triggers. For all webhooks, you must define a secret with a key named WebHookSecretKey and the value being the value to be supplied when invoking the webhook. The webhook definition must then reference the secret. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The value of the key is compared to the secret provided during the webhook invocation. For example here is a GitHub webhook with a reference to a secret named mysecret : type: "GitHub" github: secretReference: name: "mysecret" The secret is then defined as follows. Note that the value of the secret is base64 encoded as is required for any data field of a Secret object. - kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx 7.1.1.1. Adding unauthenticated users to the system:webhook role binding As a cluster administrator, you can add unauthenticated users to the system:webhook role binding in OpenShift Dedicated for specific namespaces. The system:webhook role binding allows users to trigger builds from external systems that do not use an OpenShift Dedicated authentication mechanism. Unauthenticated users do not have access to non-public role bindings by default. This is a change from OpenShift Dedicated versions before 4.16. Adding unauthenticated users to the system:webhook role binding is required to successfully trigger builds from GitHub, GitLab, and Bitbucket. If it is necessary to allow unauthenticated users access to a cluster, you can do so by adding unauthenticated users to the system:webhook role binding in each required namespace. This method is more secure than adding unauthenticated users to the system:webhook cluster role binding. However, if you have a large number of namespaces, it is possible to add unauthenticated users to the system:webhook cluster role binding which would apply the change to all namespaces. Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). 
Procedure Create a YAML file named add-webhooks-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: webhook-access-unauthenticated namespace: <namespace> 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: "system:webhook" subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: "system:unauthenticated" 1 The namespace of your BuildConfig . Apply the configuration by running the following command: USD oc apply -f add-webhooks-unauth.yaml Additional resources Cluster role bindings for unauthenticated groups 7.1.1.2. Using GitHub webhooks GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret, which is part of the URL you supply to GitHub when configuring the webhook. Example GitHub webhook definition: type: "GitHub" github: secretReference: name: "mysecret" Note The secret used in the webhook trigger configuration is not the same as the secret field you encounter when configuring webhook in GitHub UI. The secret in the webhook trigger configuration makes the webhook URL unique and hard to predict. The secret configured in the GitHub UI is an optional string field that is used to create an HMAC hex digest of the body, which is sent as an X-Hub-Signature header. The payload URL is returned as the GitHub Webhook URL by the oc describe command (see Displaying Webhook URLs), and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Prerequisites Create a BuildConfig from a GitHub repository. system:unauthenticated has access to the system:webhook role in the required namespaces. Or, system:unauthenticated has access to the system:webhook cluster role. Procedure Configure a GitHub Webhook. After creating a BuildConfig object from a GitHub repository, run the following command: USD oc describe bc/<name_of_your_BuildConfig> This command generates a webhook GitHub URL. Example output https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Cut and paste this URL into GitHub, from the GitHub web console. In your GitHub repository, select Add Webhook from Settings Webhooks . Paste the URL output into the Payload URL field. Change the Content Type from GitHub's default application/x-www-form-urlencoded to application/json . Click Add webhook . You should see a message from GitHub stating that your webhook was successfully configured. Now, when you push a change to your GitHub repository, a new build automatically starts, and upon a successful build a new deployment starts. Note Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it by your Gogs server as well. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with the following curl command: USD curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github The -k argument is only necessary if your API server does not have a properly signed certificate. 
Note The build will only be triggered if the ref value from GitHub webhook event matches the ref value specified in the source.git field of the BuildConfig resource. Additional resources Gogs 7.1.1.3. Using GitLab webhooks GitLab webhooks handle the call made by GitLab when a repository is updated. As with the GitHub triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "GitLab" gitlab: secretReference: name: "mysecret" The payload URL is returned as the GitLab Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab Prerequisites system:unauthenticated has access to the system:webhook role in the required namespaces. Or, system:unauthenticated has access to the system:webhook cluster role. Procedure Configure a GitLab Webhook. Get the webhook URL by entering the following command: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the GitLab setup instructions to paste the webhook URL into your GitLab repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with the following curl command: USD curl -H "X-GitLab-Event: Push Hook" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab The -k argument is only necessary if your API server does not have a properly signed certificate. 7.1.1.4. Using Bitbucket webhooks Bitbucket webhooks handle the call made by Bitbucket when a repository is updated. Similar to GitHub and GitLab triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "Bitbucket" bitbucket: secretReference: name: "mysecret" The payload URL is returned as the Bitbucket Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket Prerequisites system:unauthenticated has access to the system:webhook role in the required namespaces. Or, system:unauthenticated has access to the system:webhook cluster role. Procedure Configure a Bitbucket Webhook. Get the webhook URL by entering the following command: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the Bitbucket setup instructions to paste the webhook URL into your Bitbucket repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook by entering the following curl command: USD curl -H "X-Event-Key: repo:push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket The -k argument is only necessary if your API server does not have a properly signed certificate. 7.1.1.5. Using generic webhooks Generic webhooks are called from any system capable of making a web request. As with the other webhooks, you must specify a secret, which is part of the URL that the caller must use to trigger the build. 
The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig : type: "Generic" generic: secretReference: name: "mysecret" allowEnv: true 1 1 Set to true to allow a generic webhook to pass in environment variables. Procedure To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build. Example generic webhook endpoint URL The caller must call the webhook as a POST operation. To call the webhook manually, enter the following curl command: USD curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The HTTP verb must be set to POST . The insecure -k flag is specified to ignore certificate validation. This second flag is not necessary if your cluster has properly signed certificates. The endpoint can accept an optional payload with the following format: git: uri: "<url to git repository>" ref: "<optional git reference>" commit: "<commit hash identifying a specific git commit>" author: name: "<author name>" email: "<author e-mail>" committer: name: "<committer name>" email: "<committer e-mail>" message: "<commit message>" env: 1 - name: "<variable name>" value: "<variable value>" 1 Similar to the BuildConfig environment variables, the environment variables defined here are made available to your build. If these variables collide with the BuildConfig environment variables, these variables take precedence. By default, environment variables passed by webhook are ignored. Set the allowEnv field to true on the webhook definition to enable this behavior. To pass this payload using curl , define it in a file named payload_file.yaml and run the following command: USD curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The arguments are the same as the example with the addition of a header and a payload. The -H argument sets the Content-Type header to application/yaml or application/json depending on your payload format. The --data-binary argument is used to send a binary payload with newlines intact with the POST request. Note OpenShift Dedicated permits builds to be triggered by the generic webhook even if an invalid request payload is presented, for example, invalid content type, unparsable or invalid content, and so on. This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Dedicated returns a warning in JSON format as part of its HTTP 200 OK response. 7.1.1.6. Displaying webhook URLs You can use the oc describe command to display webhook URLs associated with a build configuration. If the command does not display any webhook URLs, then no webhook trigger is currently defined for that build configuration. Procedure To display any webhook URLs associated with a BuildConfig , run the following command: USD oc describe bc <name> 7.1.2. Using image change triggers As a developer, you can configure your build to run automatically every time a base image changes. You can use image change triggers to automatically invoke your build when a new version of an upstream image is available. For example, if a build is based on a RHEL image, you can trigger that build to run any time the RHEL image changes. 
As a result, the application image is always running on the latest RHEL base image. Note Image streams that point to container images in v1 container registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 container registries. Procedure Define an ImageStream that points to the upstream image you want to use as a trigger: kind: "ImageStream" apiVersion: "v1" metadata: name: "ruby-20-centos7" This defines the image stream that is tied to a container image repository located at <system-registry> / <namespace> /ruby-20-centos7 . The <system-registry> is defined as a service with the name docker-registry running in OpenShift Dedicated. If an image stream is the base image for the build, set the from field in the build strategy to point to the ImageStream : strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace. Define a build with one or more triggers that point to ImageStreams : type: "ImageChange" 1 imageChange: {} type: "ImageChange" 2 imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" 1 An image change trigger that monitors the ImageStream and Tag as defined by the build strategy's from field. The imageChange object here must be empty. 2 An image change trigger that monitors an arbitrary image stream. The imageChange part, in this case, must include a from field that references the ImageStreamTag to monitor. When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable docker tag that points to the latest image corresponding to that tag. This new image reference is used by the strategy when it executes for the build. For other image change triggers that do not reference the strategy image stream, a new build is started, but the build strategy is not updated with a unique image reference. Since this example has an image change trigger for the strategy, the resulting build is: strategy: sourceStrategy: from: kind: "DockerImage" name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>" This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs. You can pause an image change trigger to allow multiple changes on the referenced image stream before a build is started. You can also set the paused attribute to true when initially adding an ImageChangeTrigger to a BuildConfig to prevent a build from being immediately triggered. type: "ImageChange" imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" paused: true If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid> resolved from the ImageStream referenced by the Strategy . This ensures that builds are performed using consistent image tags for ease of reproduction. Additional resources v1 container registries 7.1.3. Identifying the image change trigger of a build As a developer, if you have image change triggers, you can identify which image change initiated the last build. This can be useful for debugging or troubleshooting builds. Example BuildConfig apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: # ... 
triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: "2021-06-30T13:47:53Z" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1 Note This example omits elements that are not related to image change triggers. Prerequisites You have configured multiple image change triggers. These triggers have triggered one or more builds. Procedure In the BuildConfig CR, in status.imageChangeTriggers , compare timestamps and identify the lastTriggerTime that has the latest timestamp. This ImageChangeTriggerStatus identifies the image change that started the most recent build. Use its name and namespace to find the corresponding image change trigger in buildConfig.spec.triggers . Image change triggers In your build configuration, buildConfig.spec.triggers is an array of build trigger policies, BuildTriggerPolicy . Each BuildTriggerPolicy has a type field and a set of pointer fields. Each pointer field corresponds to one of the allowed values for the type field. As such, you can set BuildTriggerPolicy to only one pointer field. For image change triggers, the value of type is ImageChange . Then, the imageChange field is the pointer to an ImageChangeTrigger object, which has the following fields: lastTriggeredImageID : This field, which is not shown in the example, is deprecated in OpenShift Dedicated 4.8 and will be ignored in a future release. It contains the resolved image reference for the ImageStreamTag when the last build was triggered from this BuildConfig . paused : You can use this field, which is not shown in the example, to temporarily disable this particular image change trigger. from : Use this field to reference the ImageStreamTag that drives this image change trigger. Its type is the core Kubernetes type, OwnerReference . The from field has the following fields of note: kind : For image change triggers, the only supported value is ImageStreamTag . namespace : Use this field to specify the namespace of the ImageStreamTag . name : Use this field to specify the ImageStreamTag . Image change trigger status In your build configuration, buildConfig.status.imageChangeTriggers is an array of ImageChangeTriggerStatus elements. Each ImageChangeTriggerStatus element includes the from , lastTriggeredImageID , and lastTriggerTime elements shown in the preceding example. The ImageChangeTriggerStatus that has the most recent lastTriggerTime triggered the most recent build. You use its name and namespace to identify the image change trigger in buildConfig.spec.triggers that triggered the build. The lastTriggerTime with the most recent timestamp signifies the ImageChangeTriggerStatus of the last build. This ImageChangeTriggerStatus has the same name and namespace as the image change trigger in buildConfig.spec.triggers that triggered the build. Additional resources v1 container registries 7.1.4. Configuration change triggers A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created.
The following is an example trigger definition YAML within the BuildConfig : type: "ConfigChange" Note Configuration change triggers currently only work when creating a new BuildConfig . In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated. 7.1.4.1. Setting triggers manually Triggers can be added to and removed from build configurations with oc set triggers . Procedure To set a GitHub webhook trigger on a build configuration, enter the following command: USD oc set triggers bc <name> --from-github To set an image change trigger, enter the following command: USD oc set triggers bc <name> --from-image='<image>' To remove a trigger, enter the following command: USD oc set triggers bc <name> --from-bitbucket --remove Note When a webhook trigger already exists, adding it again regenerates the webhook secret. For more information, consult the help documentation by entering the following command: USD oc set triggers --help 7.2. Build hooks Build hooks allow behavior to be injected into the build process. The postCommit field of a BuildConfig object runs commands inside a temporary container that is running the build output image. The hook is run immediately after the last layer of the image has been committed and before the image is pushed to a registry. The current working directory is set to the image's WORKDIR , which is the default working directory of the container image. For most images, this is where the source code is located. The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs. Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0 , the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log contains the output of the test runner, which can be used to identify failed tests. The postCommit hook is not only limited to running tests, but can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that running the hook cannot affect the final image. This behavior allows for, among other uses, the installation and usage of test dependencies that are automatically discarded and are not present in the final image. 7.2.1. Configuring post commit build hooks There are different ways to configure the post-build hook. All forms in the following examples are equivalent and run bundle exec rake test --verbose . Procedure Use one of the following options to configure post-build hooks: Option Description Shell script postCommit: script: "bundle exec rake test --verbose" The script value is a shell script to be run with /bin/sh -ic . Use this option when a shell script is appropriate to execute the build hook. For example, for running unit tests as above. To control the image entry point or if the image does not have /bin/sh , use command , or args , or both. Note The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release. 
Command as the image entry point postCommit: command: ["/bin/bash", "-c", "bundle exec rake test --verbose"] In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference . This is needed if the image does not have /bin/sh , or if you do not want to use a shell. In all other cases, using script might be more convenient. Command with arguments postCommit: command: ["bundle", "exec", "rake", "test"] args: ["--verbose"] This form is equivalent to appending the arguments to command . Note Providing both script and command simultaneously creates an invalid build hook. 7.2.2. Using the CLI to set post commit build hooks The oc set build-hook command can be used to set the build hook for a build configuration. Procedure Complete one of the following actions: To set a command as the post-commit build hook, enter the following command: USD oc set build-hook bc/mybc \ --post-commit \ --command \ -- bundle exec rake test --verbose To set a script as the post-commit build hook, enter the following command: USD oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose" | [
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: webhook-access-unauthenticated namespace: <namespace> 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: \"system:webhook\" subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: \"system:unauthenticated\"",
"oc apply -f add-webhooks-unauth.yaml",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"oc describe bc/<name_of_your_BuildConfig>",
"https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"oc describe bc <name>",
"curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"oc describe bc <name>",
"curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"",
"curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"oc describe bc <name>",
"kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"",
"type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"",
"type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1",
"Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.",
"type: \"ConfigChange\"",
"oc set triggers bc <name> --from-github",
"oc set triggers bc <name> --from-image='<image>'",
"oc set triggers bc <name> --from-bitbucket --remove",
"oc set triggers --help",
"postCommit: script: \"bundle exec rake test --verbose\"",
"postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]",
"postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]",
"oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose",
"oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\""
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_buildconfig/triggering-builds-build-hooks |
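As a consolidated sketch of how the trigger and hook settings described above can appear together, the following BuildConfig fragment (the resource name and secret name are placeholders rather than values taken from the examples above) combines a GitHub webhook trigger, an image change trigger on the strategy image stream, a configuration change trigger, and a post-commit test hook:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-bc
spec:
  triggers:
  - type: "GitHub"
    github:
      secretReference:
        name: "mysecret"
  - type: "ImageChange"
    imageChange: {}            # watches the image stream referenced by the build strategy
  - type: "ConfigChange"
  postCommit:
    script: "bundle exec rake test --verbose"

The same result can also be reached incrementally with the oc set triggers and oc set build-hook commands shown earlier.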
Chapter 2. Fuse on OpenShift | Chapter 2. Fuse on OpenShift Fuse on OpenShift enables you to deploy Fuse applications on OpenShift Container Platform. 2.1. Supported version of OpenShift For details of the supported version (or versions) of OpenShift Container Platform to use with Fuse on OpenShift, see the Supported Configurations page. 2.2. Supported images Fuse on OpenShift provides the following Docker-formatted images: Image Platform Supported architectures fuse7/fuse-java-openshift-rhel8 Spring Boot AMD64 and Intel 64 (x86_64) fuse7/fuse-java-openshift-jdk11-rhel8 Spring Boot AMD64 and Intel 64 (x86_64) fuse7/fuse-java-openshift-jdk17-rhel8 Spring Boot AMD64 and Intel 64 (x86_64) fuse7/fuse-java-openshift-openj9-11-rhel8 Spring Boot IBM Z and LinuxONE (s390x) IBM Power Systems (ppc64le) fuse7/fuse-karaf-openshift-rhel8 Apache Karaf AMD64 and Intel 64 (x86_64) fuse7/fuse-karaf-openshift-jdk11-rhel8 Apache Karaf AMD64 and Intel 64 (x86_64) fuse7/fuse-karaf-openshift-jdk17-rhel8 Apache Karaf AMD64 and Intel 64 (x86_64) fuse7/fuse-eap-openshift-jdk11-rhel8 Red Hat JBoss Enterprise Application Platform AMD64 and Intel 64 (x86_64) fuse7/fuse-eap-openshift-jdk17-rhel8 Red Hat JBoss Enterprise Application Platform AMD64 and Intel 64 (x86_64) fuse7/fuse-console-rhel8 Fuse console AMD64 and Intel 64 (x86_64) IBM Z and LinuxONE (s390x) IBM Power Systems (ppc64le) fuse7/fuse-console-rhel8-operator Fuse console operator AMD64 and Intel 64 (x86_64) IBM Z and LinuxONE (s390x) IBM Power Systems (ppc64le) fuse7/fuse-apicurito-generator-rhel8 Apicurito REST application generator AMD64 and Intel 64 (x86_64) fuse7/fuse-apicurito-rhel8 Apicurito REST API editor AMD64 and Intel 64 (x86_64) fuse7/fuse-apicurito-rhel8-operator API Designer Operator AMD64 and Intel 64 (x86_64) | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/release_notes_for_red_hat_fuse_7.13/fisdistrib
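To make one of the images in the table above available in an OpenShift project as an image stream, a minimal sketch is the following oc import-image command; it assumes the image is pulled from the registry.redhat.io catalog and that a pull secret for that registry is already configured in the project:

oc import-image fuse-java-openshift-rhel8 \
  --from=registry.redhat.io/fuse7/fuse-java-openshift-rhel8 \
  --confirm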
probe::ipmib.OutRequests | probe::ipmib.OutRequests Name probe::ipmib.OutRequests - Count a request to send a packet Synopsis ipmib.OutRequests Values skb pointer to the struct sk_buff being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global OutRequests (equivalent to SNMP's MIB IPSTATS_MIB_OUTREQUESTS) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-outrequests
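A minimal usage sketch for this probe point (the five-second reporting interval is an arbitrary choice) keeps a running total of the op values and prints the count periodically:

global out_requests
probe ipmib.OutRequests
{
  # op is 1 per packet by default, so this is effectively a packet counter
  out_requests += op
}
probe timer.s(5)
{
  printf ("IP OutRequests counted so far: %d\n", out_requests)
}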
Chapter 5. Securing Service Registry deployments | Chapter 5. Securing Service Registry deployments Service Registry provides authentication and authorization by using Red Hat Single Sign-On based on OpenID Connect (OIDC) and HTTP basic. You can configure the required settings automatically using the Red Hat Single Sign-On Operator, or manually configure them in Red Hat Single Sign-On and Service Registry. Service Registry also provides authentication and authorization by using Microsoft Azure Active Directory based on OpenID Connect (OIDC) and the OAuth Authorization Code Flow. You can configure the required settings manually in Azure AD and Service Registry. In addition to role-based authorization options with Red Hat Single Sign-On or Azure AD, Service Registry also provides content-based authorization at the schema or API level, where only the artifact creator has write access. You can also configure an HTTPS connection to Service Registry from inside or outside an OpenShift cluster. This chapter explains how to configure the following security options for your Service Registry deployment on OpenShift: Section 5.1, "Securing Service Registry using the Red Hat Single Sign-On Operator" Section 5.2, "Configuring Service Registry authentication and authorization with Red Hat Single Sign-On" Section 5.3, "Configuring Service Registry authentication and authorization with Microsoft Azure Active Directory" Section 5.4, "Service Registry authentication and authorization configuration options" Section 5.5, "Configuring an HTTPS connection to Service Registry from inside the OpenShift cluster" Section 5.6, "Configuring an HTTPS connection to Service Registry from outside the OpenShift cluster" Additional resources For details on security configuration for Java client applications, see the following: Service Registry Java client configuration Service Registry serializer/deserializer configuration 5.1. Securing Service Registry using the Red Hat Single Sign-On Operator The following procedure shows how to configure a Service Registry REST API and web console to be protected by Red Hat Single Sign-On. Service Registry supports the following user roles: Table 5.1. Service Registry user roles Name Capabilities sr-admin Full access, no restrictions. sr-developer Create artifacts and configure artifact rules. Cannot modify global rules, perform import/export, or use /admin REST API endpoint. sr-readonly View and search only. Cannot modify artifacts or rules, perform import/export, or use /admin REST API endpoint. Note There is a related configuration option in the ApicurioRegistry CRD that you can use to set the web console to read-only mode. However, this configuration does not affect the REST API. Prerequisites You must have already installed the Service Registry Operator. You must install the Red Hat Single Sign-On Operator or have Red Hat Single Sign-On accessible from your OpenShift cluster. Important The example configuration in this procedure is intended for development and testing only. To keep the procedure simple, it does not use HTTPS and other defenses recommended for a production environment. For more details, see the Red Hat Single Sign-On documentation. Procedure In the OpenShift web console, click Installed Operators and Red Hat Single Sign-On Operator , and then the Keycloak tab. Click Create Keycloak to provision a new Red Hat Single Sign-On instance for securing a Service Registry deployment.
You can use the default value, for example: apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso spec: instances: 1 externalAccess: enabled: True podDisruptionBudget: enabled: True Wait until the instance has been created, and click Networking and then Routes to access the new route for the keycloak instance. Click the Location URL and copy the displayed URL value for later use when deploying Service Registry. Click Installed Operators and Red Hat Single Sign-On Operator , and click the Keycloak Realm tab, and then Create Keycloak Realm to create a registry example realm: apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: registry-keycloakrealm labels: app: registry spec: instanceSelector: matchLabels: app: sso realm: displayName: Registry enabled: true id: registry realm: registry sslRequired: none roles: realm: - name: sr-admin - name: sr-developer - name: sr-readonly clients: - clientId: registry-client-ui implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true - clientId: registry-client-api implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true users: - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-admin username: registry-admin - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-developer username: registry-developer - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-readonly username: registry-user Important You must customize this KeycloakRealm resource with values suitable for your environment if you are deploying to production. You can also create and manage realms using the Red Hat Single Sign-On web console. If your cluster does not have a valid HTTPS certificate configured, you can create the following HTTP Service and Ingress resources as a temporary workaround: Click Networking and then Services , and click Create Service using the following example: apiVersion: v1 kind: Service metadata: name: keycloak-http labels: app: keycloak spec: ports: - name: keycloak-http protocol: TCP port: 8080 targetPort: 8080 selector: app: keycloak component: keycloak type: ClusterIP sessionAffinity: None status: loadBalancer: {} Click Networking and then Ingresses , and click Create Ingress using the following example:: Modify the host value to create a route accessible for the Service Registry user, and use it instead of the HTTPS route created by Red Hat Single Sign-On Operator. Click the Service Registry Operator , and on the ApicurioRegistry tab, click Create ApicurioRegistry , using the following example, but replace your values in the keycloak section. apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry-kafkasql-keycloak spec: configuration: security: keycloak: url: "http://keycloak-http-<namespace>.apps.<cluster host>" # ^ Required # Use an HTTP URL in development. realm: "registry" # apiClientId: "registry-client-api" # ^ Optional (default value) # uiClientId: "registry-client-ui" # ^ Optional (default value) persistence: 'kafkasql' kafkasql: bootstrapServers: '<my-cluster>-kafka-bootstrap.<my-namespace>.svc:9092' 5.2. 
Configuring Service Registry authentication and authorization with Red Hat Single Sign-On This section explains how to manually configure authentication and authorization options for Service Registry and Red Hat Single Sign-On. Note Alternatively, for details on how to configure these settings automatically, see Section 5.1, "Securing Service Registry using the Red Hat Single Sign-On Operator" . The Service Registry web console and core REST API support authentication in Red Hat Single Sign-On based on OAuth and OpenID Connect (OIDC). The same Red Hat Single Sign-On realm and users are federated across the Service Registry web console and core REST API using OpenID Connect so that you only require one set of credentials. Service Registry provides role-based authorization for default admin, write, and read-only user roles. Service Registry provides content-based authorization at the schema or API level, where only the creator of the registry artifact can update or delete it. Service Registry authentication and authorization settings are disabled by default. Prerequisites Red Hat Single Sign-On is installed and running. For more details, see the Red Hat Single Sign-On user documentation . Service Registry is installed and running. Procedure In the Red Hat Single Sign-On Admin Console, create a Red Hat Single Sign-On realm for Service Registry. By default, Service Registry expects a realm name of registry . For details on creating realms, see the Red Hat Single Sign-On user documentation . Create a Red Hat Single Sign-On client for the Service Registry API. By default, Service Registry expects the following settings: Client ID : registry-api Client Protocol : openid-connect Access Type : bearer-only You can use the defaults for the other client settings. Note If you are using Red Hat Single Sign-On service accounts, the client Access Type must be confidential instead of bearer-only . Create a Red Hat Single Sign-On client for the Service Registry web console. By default, Service Registry expects the following settings: Client ID : apicurio-registry Client Protocol : openid-connect Access Type : public Valid Redirect URLs : http://my-registry-url:8080/* Web Origins : + You can use the defaults for the other client settings. In your Service Registry deployment on OpenShift, set the following Service Registry environment variables to configure authentication using Red Hat Single Sign-On: Table 5.2. Configuration for Service Registry authentication with Red Hat Single Sign-On Environment variable Description Type Default AUTH_ENABLED Enables authentication for Service Registry. When set to true , the environment variables that follow are required for authentication using Red Hat Single Sign-On. String false KEYCLOAK_URL The URL of the Red Hat Single Sign-On authentication server. For example, http://localhost:8080 . String - KEYCLOAK_REALM The Red Hat Single Sign-On realm for authentication. For example, registry. String - KEYCLOAK_API_CLIENT_ID The client ID for the Service Registry REST API. String registry-api KEYCLOAK_UI_CLIENT_ID The client ID for the Service Registry web console. String apicurio-registry Tip For an example of setting environment variables on OpenShift, see Section 6.1, "Configuring Service Registry health checks on OpenShift" . Set the following option to true to enable Service Registry user roles in Red Hat Single Sign-On: Table 5.3.
Configuration for Service Registry role-based authorization Environment variable Java system property Type Default value ROLE_BASED_AUTHZ_ENABLED registry.auth.role-based-authorization Boolean false When Service Registry user roles are enabled, you must assign Service Registry users to at least one of the following default user roles in your Red Hat Single Sign-On realm: Table 5.4. Default user roles for registry authentication and authorization Role Read artifacts Write artifacts Global rules Summary sr-admin Yes Yes Yes Full access to all create, read, update, and delete operations. sr-developer Yes Yes No Access to create, read, update, and delete operations, except configuring global rules. This role can configure artifact-specific rules. sr-readonly Yes No No Access to read and search operations only. This role cannot configure any rules. Set the following to true to enable owner-only authorization for updates to schema and API artifacts in Service Registry: Table 5.5. Configuration for owner-only authorization Environment variable Java system property Type Default value REGISTRY_AUTH_OBAC_ENABLED registry.auth.owner-only-authorization Boolean false Additional resources For details on configuring non-default user role names, see Section 5.4, "Service Registry authentication and authorization configuration options" . For an open source example application and Keycloak realm, see Docker Compose example of Apicurio Registry with Keycloak . For details on how to use Red Hat Single Sign-On in a production environment, see the Red Hat Single Sign-On documentation . 5.3. Configuring Service Registry authentication and authorization with Microsoft Azure Active Directory This section explains how to manually configure authentication and authorization options for Service Registry and Microsoft Azure Active Directory (Azure AD). The Service Registry web console and core REST API support authentication in Azure AD based on OpenID Connect (OIDC) and the OAuth Authorization Code Flow. Service Registry provides role-based authorization for default admin, write, and read-only user roles. Service Registry authentication and authorization settings are disabled by default. To secure Service Registry with Azure AD, you require a valid directory in Azure AD with specific configuration. This involves registering the Service Registry application in the Azure AD portal with recommended settings and configuring environment variables in Service Registry. Prerequisites Azure AD is installed and running. For more details, see the Microsoft Azure AD user documentation . Service Registry is installed and running. Procedure Log in to the Azure AD portal using your email address or GitHub account. In the navigation menu, select Manage > App registrations > New registration , and complete the following settings: Name : Enter your application name. For example: apicurio-registry-example Supported account types : Click Accounts in any organizational directory . Redirect URI : Select application from the list, and enter your Service Registry web console application host. For example: https://test-registry.com/ui/ Important You must register your Service Registry application host as a Redirect URI . When logging in, users are redirected from Service Registry to Azure AD for authentication, and you want to send them back to your application afterwards. Azure AD does not allow any redirect URLs that are not registered. Click Register . 
You can view your app registration details by selecting Manage > App registrations > apicurio-registry-example . Select Manage > Authentication and ensure that the application is configured with your redirect URLs and tokens as follows: Redirect URIs : For example: https://test-registry.com/ui/ Implicit grant and hybrid flows : Click ID tokens (used for implicit and hybrid flows) Select Azure AD > Admin > App registrations > Your app > Application (client) ID . For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 Select Azure AD > Admin > App registrations > Your app > Directory (tenant) ID . For example: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0 In Service Registry, configure the following environment variables with your Azure AD settings: Table 5.6. Configuration for Azure AD settings in Service Registry Environment variable Description Setting KEYCLOAK_API_CLIENT_ID The client application ID for the Service Registry REST API Your Azure AD Application (client) ID obtained in step 5. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 REGISTRY_OIDC_UI_CLIENT_ID The client application ID for the Service Registry web console. Your Azure AD Application (client) ID obtained in step 5. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 REGISTRY_AUTH_URL_CONFIGURED The URL for authentication in Azure AD. Your Azure AD Application (tenant) ID obtained in step 6. For example: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0 . In Service Registry, configure the following environment variables for Service Registry-specific settings: Table 5.7. Configuration for Service Registry-specific settings Environment variable Description Setting REGISTRY_AUTH_ENABLED Enables authentication for Service Registry. true REGISTRY_UI_AUTH_TYPE The Service Registry authentication type. oidc CORS_ALLOWED_ORIGINS The host for your Service Registry deployment for cross-origin resource sharing (CORS). For example: https://test-registry.com REGISTRY_OIDC_UI_REDIRECT_URL The host for your Service Registry web console. For example: https://test-registry.com/ui ROLE_BASED_AUTHZ_ENABLED Enables role-based authorization in Service Registry. true QUARKUS_OIDC_ROLES_ROLE_CLAIM_PATH The name of the claim in which Azure AD stores roles. roles Note When you enable roles in Service Registry, you must also create the same roles in Azure AD as application roles. The default roles expected by Service Registry are sr-admin , sr-developer , and sr-readonly . Additional resources For details on configuring non-default user role names, see Section 5.4, "Service Registry authentication and authorization configuration options" . For more details on using Azure AD, see the Microsoft Azure AD user documentation . 5.4. Service Registry authentication and authorization configuration options Service Registry provides authentication options for OpenID Connect with Red Hat Single Sign-On and HTTP basic authentication. Service Registry provides authorization options for role-based and content-based approaches: Role-based authorization for default admin, write, and read-only user roles. Content-based authorization for schema or API artifacts, where only the owner of the artifacts or artifact group can update or delete artifacts. Important All authentication and authorization options in Service Registry are disabled by default. Before enabling any of these options, you must first set the AUTH_ENABLED option to true . 
This chapter provides details on the following configuration options: Service Registry authentication by using OpenID Connect with Red Hat Single Sign-On Service Registry authentication by using HTTP basic Service Registry role-based authorization Service Registry owner-only authorization Service Registry authenticated read access Service Registry anonymous read-only access Service Registry authentication by using OpenID Connect with Red Hat Single Sign-On You can set the following environment variables to configure authentication for the Service Registry web console and API with Red Hat Single Sign-On: Table 5.8. Configuration for Service Registry authentication with Red Hat Single Sign-On Environment variable Description Type Default AUTH_ENABLED Enables authentication for Service Registry. When set to true , the environment variables that follow are required for authentication using Red Hat Single Sign-On. String false KEYCLOAK_URL The URL of the Red Hat Single Sign-On authentication server. For example, http://localhost:8080 . String - KEYCLOAK_REALM The Red Hat Single Sign-On realm for authentication. For example, registry. String - KEYCLOAK_API_CLIENT_ID The client ID for the Service Registry REST API. String registry-api KEYCLOAK_UI_CLIENT_ID The client ID for the Service Registry web console. String apicurio-registry Service Registry authentication by using HTTP basic By default, Service Registry supports authentication by using OpenID Connect. Users or API clients must obtain an access token to make authenticated calls to the Service Registry REST API. However, because some tools do not support OpenID Connect, you can also configure Service Registry to support HTTP basic authentication by setting the following configuration options to true : Table 5.9. Configuration for Service Registry HTTP basic authentication Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false CLIENT_CREDENTIALS_BASIC_AUTH_ENABLED registry.auth.basic-auth-client-credentials.enabled Boolean false Service Registry HTTP basic client credentials cache expiry You can also configure the HTTP basic client credentials cache expiry time. By default, when using HTTP basic authentication, Service Registry caches JWT tokens, and does not issue a new token when there is no need. You can configure the cache expiry time for JWT tokens, which is set to 10 mins by default. When using Red Hat Single Sign-On, it is best to set this configuration to your Red Hat Single Sign-On JWT expiry time minus one minute. For example, if you have the expiry time set to 5 mins in Red Hat Single Sign-On, you should set the following configuration option to 4 mins: Table 5.10. Configuration for HTTP basic client credentials cache expiry Environment variable Java system property Type Default value CLIENT_CREDENTIALS_BASIC_CACHE_EXPIRATION registry.auth.basic-auth-client-credentials.cache-expiration Integer 10 Service Registry role-based authorization You can set the following options to true to enable role-based authorization in Service Registry: Table 5.11. 
Configuration for Service Registry role-based authorization Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false ROLE_BASED_AUTHZ_ENABLED registry.auth.role-based-authorization Boolean false You can then configure role-based authorization to use roles included in the user's authentication token (for example, granted when authenticating by using Red Hat Single Sign-On), or to use role mappings managed internally by Service Registry. Use roles assigned in Red Hat Single Sign-On To enable using roles assigned by Red Hat Single Sign-On, set the following environment variables: Table 5.12. Configuration for Service Registry role-based authorization by using Red Hat Single Sign-On Environment variable Description Type Default ROLE_BASED_AUTHZ_SOURCE When set to token , user roles are taken from the authentication token. String token REGISTRY_AUTH_ROLES_ADMIN The name of the role that indicates a user is an admin. String sr-admin REGISTRY_AUTH_ROLES_DEVELOPER The name of the role that indicates a user is a developer. String sr-developer REGISTRY_AUTH_ROLES_READONLY The name of the role that indicates a user has read-only access. String sr-readonly When Service Registry is configured to use roles from Red Hat Single Sign-On, you must assign Service Registry users to at least one of the following user roles in Red Hat Single Sign-On. However, you can configure different user role names by using the environment variables in Table 5.12, "Configuration for Service Registry role-based authorization by using Red Hat Single Sign-On" . Table 5.13. Service Registry roles for authentication and authorization Role name Read artifacts Write artifacts Global rules Description sr-admin Yes Yes Yes Full access to all create, read, update, and delete operations. sr-developer Yes Yes No Access to create, read, update, and delete operations, except configuring global rules and import/export. This role can configure artifact-specific rules only. sr-readonly Yes No No Access to read and search operations only. This role cannot configure any rules. Manage roles directly in Service Registry To enable using roles managed internally by Service Registry, set the following environment variable: Table 5.14. Configuration for Service Registry role-based authorization by using internal role mappings Environment variable Description Type Default ROLE_BASED_AUTHZ_SOURCE When set to application , user roles are managed internally by Service Registry. String token When using internally managed role mappings, users can be assigned a role by using the /admin/roleMappings endpoint in the Service Registry REST API. For more details, see Apicurio Registry REST API documentation . Users can be granted exactly one role: ADMIN , DEVELOPER , or READ_ONLY . Only users with admin privileges can grant access to other users. Service Registry admin-override configuration Because there are no default admin users in Service Registry, it is usually helpful to configure another way for users to be identified as admins. You can configure this admin-override feature by using the following environment variables: Table 5.15. Configuration for Service Registry admin-override Environment variable Description Type Default REGISTRY_AUTH_ADMIN_OVERRIDE_ENABLED Enables the admin-override feature. String false REGISTRY_AUTH_ADMIN_OVERRIDE_FROM Where to look for admin-override information. Only token is currently supported. 
String token REGISTRY_AUTH_ADMIN_OVERRIDE_TYPE The type of information used to determine if a user is an admin. Values depend on the value of the FROM variable, for example, role or claim when FROM is token . String role REGISTRY_AUTH_ADMIN_OVERRIDE_ROLE The name of the role that indicates a user is an admin. String sr-admin REGISTRY_AUTH_ADMIN_OVERRIDE_CLAIM The name of a JWT token claim to use for determining admin-override. String org-admin REGISTRY_AUTH_ADMIN_OVERRIDE_CLAIM_VALUE The value that the JWT token claim indicated by the CLAIM variable must be for the user to be granted admin-override. String true For example, you can use this admin-override feature to assign the sr-admin role to a single user in Red Hat Single Sign-On, which grants that user the admin role. That user can then use the /admin/roleMappings REST API (or associated UI) to grant roles to additional users (including additional admins). Service Registry owner-only authorization You can set the following options to true to enable owner-only authorization for updates to artifacts or artifact groups in Service Registry: Table 5.16. Configuration for owner-only authorization Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false REGISTRY_AUTH_OBAC_ENABLED registry.auth.owner-only-authorization Boolean false REGISTRY_AUTH_OBAC_LIMIT_GROUP_ACCESS registry.auth.owner-only-authorization.limit-group-access Boolean false When owner-only authorization is enabled, only the user who created an artifact can modify or delete that artifact. When owner-only authorization and group owner-only authorization are both enabled, only the user who created an artifact group has write access to that artifact group, for example, to add or remove artifacts in that group. Service Registry authenticated read access When the authenticated read access option is enabled, Service Registry grants at least read-only access to requests from any authenticated user in the same organization, regardless of their user role. To enable authenticated read access, you must first enable role-based authorization, and then ensure that the following options are set to true : Table 5.17. Configuration for authenticated read access Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false REGISTRY_AUTH_AUTHENTICATED_READS_ENABLED registry.auth.authenticated-read-access.enabled Boolean false For more details, see the section called "Service Registry role-based authorization" . Service Registry anonymous read-only access In addition to the two main types of authorization (role-based and owner-based authorization), Service Registry supports an anonymous read-only access option. To allow anonymous users, such as REST API calls with no authentication credentials, to make read-only calls to the REST API, set the following options to true : Table 5.18. Configuration for anonymous read-only access Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false REGISTRY_AUTH_ANONYMOUS_READ_ACCESS_ENABLED registry.auth.anonymous-read-access.enabled Boolean false Additional resources For an example of how to set environment variables in your Service Registry deployment on OpenShift, see Section 6.1, "Configuring Service Registry health checks on OpenShift" For details on configuring custom authentication for Service Registry, the see Quarkus Open ID Connect documentation 5.5. 
Configuring an HTTPS connection to Service Registry from inside the OpenShift cluster The following procedure shows how to configure Service Registry deployment to expose a port for HTTPS connections from inside the OpenShift cluster. Warning This kind of connection is not directly available outside of the cluster. Routing is based on hostname, which is encoded in the case of an HTTPS connection. Therefore, edge termination or other configuration is still needed. See Section 5.6, "Configuring an HTTPS connection to Service Registry from outside the OpenShift cluster" . Prerequisites You must have already installed the Service Registry Operator. Procedure Generate a keystore with a self-signed certificate. You can skip this step if you are using your own certificates. openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout tls.key -out tls.crt Create a new secret to hold the certificate and the private key. In the left navigation menu of the OpenShift web console, click Workloads > Secrets > Create Key/Value Secret . Use the following values: Name: https-cert-secret Key 1: tls.key Value 1: tls.key (uploaded file) Key 2: tls.crt Value 2: tls.crt (uploaded file) or create the secret using the following command: oc create secret generic https-cert-secret --from-file=tls.key --from-file=tls.crt Edit the spec.configuration.security.https section of the ApicurioRegistry CR for your Service Registry deployment, for example: apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # ... security: https: secretName: https-cert-secret Verify that the connection is working: Connect into a pod on the cluster using SSH (you can use the Service Registry pod): oc rsh example-apicurioregistry-deployment-6f788db977-2wzpw Find the cluster IP of the Service Registry pod from the Service resource (see the Location column in the web console). Afterwards, execute a test request (we are using self-signed certificate, so an insecure flag is required): curl -k https://172.30.230.78:8443/health Note In the Kubernetes secret containing the HTTPS certificate and key, the names tls.crt and tls.key must be used for the provided values. This is currently not configurable. Disabling HTTP If you enabled HTTPS using the procedure in this section, you can also disable the default HTTP connection by setting the spec.security.https.disableHttp to true . This removes the HTTP port 8080 from the Service Registry pod container, Service , and the NetworkPolicy (if present). Importantly, Ingress is also removed because the Service Registry Operator currently does not support configuring HTTPS in Ingress . Users must create an Ingress for HTTPS connections manually. Additional resources How to enable HTTPS and SSL termination in a Quarkus app 5.6. Configuring an HTTPS connection to Service Registry from outside the OpenShift cluster The following procedure shows how to configure Service Registry deployment to expose an HTTPS edge-terminated route for connections from outside the OpenShift cluster. Prerequisites You must have already installed the Service Registry Operator. Read the OpenShift documentation for creating secured routes . Procedure Add a second Route in addition to the HTTP route created by the Service Registry Operator. For example: kind: Route apiVersion: route.openshift.io/v1 metadata: [...] labels: app: example-apicurioregistry [...] 
spec: host: example-apicurioregistry-default.apps.example.com to: kind: Service name: example-apicurioregistry-service-9whd7 weight: 100 port: targetPort: 8080 tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None Note Make sure the insecureEdgeTerminationPolicy: Redirect configuration property is set. If you do not specify a certificate, OpenShift will use a default. Alternatively, you can generate a custom self-signed certificate using the following commands: openssl genrsa 2048 > tls.key && openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt Then create a route using the OpenShift CLI: oc create route edge \ --service=example-apicurioregistry-service-9whd7 \ --cert=tls.crt --key=tls.key \ --hostname=example-apicurioregistry-default.apps.example.com \ --insecure-policy=Redirect \ -n default | [
"apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso spec: instances: 1 externalAccess: enabled: True podDisruptionBudget: enabled: True",
"apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: registry-keycloakrealm labels: app: registry spec: instanceSelector: matchLabels: app: sso realm: displayName: Registry enabled: true id: registry realm: registry sslRequired: none roles: realm: - name: sr-admin - name: sr-developer - name: sr-readonly clients: - clientId: registry-client-ui implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true - clientId: registry-client-api implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true users: - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-admin username: registry-admin - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-developer username: registry-developer - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-readonly username: registry-user",
"apiVersion: v1 kind: Service metadata: name: keycloak-http labels: app: keycloak spec: ports: - name: keycloak-http protocol: TCP port: 8080 targetPort: 8080 selector: app: keycloak component: keycloak type: ClusterIP sessionAffinity: None status: loadBalancer: {}",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: keycloak-http labels: app: keycloak spec: rules: - host: KEYCLOAK_HTTP_HOST http: paths: - path: / pathType: ImplementationSpecific backend: service: name: keycloak-http port: number: 8080",
"apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry-kafkasql-keycloak spec: configuration: security: keycloak: url: \"http://keycloak-http-<namespace>.apps.<cluster host>\" # ^ Required # Use an HTTP URL in development. realm: \"registry\" # apiClientId: \"registry-client-api\" # ^ Optional (default value) # uiClientId: \"registry-client-ui\" # ^ Optional (default value) persistence: 'kafkasql' kafkasql: bootstrapServers: '<my-cluster>-kafka-bootstrap.<my-namespace>.svc:9092'",
"openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout tls.key -out tls.crt",
"create secret generic https-cert-secret --from-file=tls.key --from-file=tls.crt",
"apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # security: https: secretName: https-cert-secret",
"rsh example-apicurioregistry-deployment-6f788db977-2wzpw",
"curl -k https://172.30.230.78:8443/health",
"kind: Route apiVersion: route.openshift.io/v1 metadata: [...] labels: app: example-apicurioregistry [...] spec: host: example-apicurioregistry-default.apps.example.com to: kind: Service name: example-apicurioregistry-service-9whd7 weight: 100 port: targetPort: 8080 tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None",
"openssl genrsa 2048 > tls.key && openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt",
"create route edge --service=example-apicurioregistry-service-9whd7 --cert=tls.crt --key=tls.key --hostname=example-apicurioregistry-default.apps.example.com --insecure-policy=Redirect -n default"
] | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_and_deploying_service_registry_on_openshift/securing-the-registry |
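As a quick way to exercise the OpenID Connect and role-based settings documented above on a running deployment, the environment variables can be set directly on the registry Deployment. This is only a sketch: the deployment name, Keycloak host, and registry host are placeholders, and on an Operator-managed installation the ApicurioRegistry custom resource remains the place to persist this configuration.

  oc set env deployment/example-apicurioregistry-deployment \
      AUTH_ENABLED=true \
      KEYCLOAK_URL=https://keycloak.example.com \
      KEYCLOAK_REALM=registry \
      KEYCLOAK_API_CLIENT_ID=registry-api \
      KEYCLOAK_UI_CLIENT_ID=apicurio-registry \
      ROLE_BASED_AUTHZ_ENABLED=true

Once the pod restarts, an unauthenticated request to the core REST API, for example curl -s -o /dev/null -w "%{http_code}" http://<registry-host>/apis/registry/v2/search/artifacts, should return 401 rather than 200, confirming that authentication is now enforced.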
9.3. Configuring Device Controllers | 9.3. Configuring Device Controllers Depending on the guest virtual machine architecture, some device buses can appear more than once, with a group of virtual devices tied to a virtual controller. Normally, libvirt can automatically infer such controllers without requiring explicit XML markup, but in some cases it is better to explicitly set a virtual controller element. ... <devices> <controller type='ide' index='0'/> <controller type='virtio-serial' index='0' ports='16' vectors='4'/> <controller type='virtio-serial' index='1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </controller> ... </devices> ... Figure 9.11. Domain XML example for virtual controllers Each controller has a mandatory attribute <controller type> , which must be one of: ide fdc scsi sata usb ccid virtio-serial pci The <controller> element has a mandatory attribute <controller index> which is the decimal integer describing in which order the bus controller is encountered (for use in controller attributes of <address> elements). When <controller type ='virtio-serial'> there are two additional optional attributes (named ports and vectors ), which control how many devices can be connected through the controller. Note that Red Hat Enterprise Linux 6 does not support the use of more than 32 vectors per device. Using more vectors will cause failures in migrating the guest virtual machine. When <controller type ='scsi'> , there is an optional attribute model model, which can have the following values: auto buslogic ibmvscsi lsilogic lsisas1068 lsisas1078 virtio-scsi vmpvscsi When <controller type ='usb'> , there is an optional attribute model model, which can have the following values: piix3-uhci piix4-uhci ehci ich9-ehci1 ich9-uhci1 ich9-uhci2 ich9-uhci3 vt82c686b-uhci pci-ohci nec-xhci Note If the USB bus needs to be explicitly disabled for the guest virtual machine, <model='none'> may be used. . For controllers that are themselves devices on a PCI or USB bus, an optional sub-element <address> can specify the exact relationship of the controller to its master bus, with semantics as shown in Section 9.4, "Setting Addresses for Devices" . An optional sub-element <driver> can specify the driver specific options. Currently it only supports attribute queues, which specifies the number of queues for the controller. For best performance, it is recommended to specify a value matching the number of vCPUs. USB companion controllers have an optional sub-element <master> to specify the exact relationship of the companion to its master controller. A companion controller is on the same bus as its master, so the companion index value should be equal. An example XML which can be used is as follows: ... <devices> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0' bus='0' slot='4' function='7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/> </controller> ... </devices> ... Figure 9.12. Domain XML example for USB controllers PCI controllers have an optional model attribute with the following possible values: pci-root pcie-root pci-bridge dmi-to-pci-bridge The root controllers ( pci-root and pcie-root ) have an optional pcihole64 element specifying how big (in kilobytes, or in the unit specified by pcihole64 's unit attribute) the 64-bit PCI hole should be. 
Some guest virtual machines (such as Windows Server 2003) may cause a crash, unless unit is disabled (set to 0 unit='0' ). For machine types which provide an implicit PCI bus, the pci-root controller with index='0' is auto-added and required to use PCI devices. pci-root has no address. PCI bridges are auto-added if there are too many devices to fit on the one bus provided by model='pci-root' , or a PCI bus number greater than zero was specified. PCI bridges can also be specified manually, but their addresses should only refer to PCI buses provided by already specified PCI controllers. Leaving gaps in the PCI controller indexes might lead to an invalid configuration. The following XML example can be added to the <devices> section: ... <devices> <controller type='pci' index='0' model='pci-root'/> <controller type='pci' index='1' model='pci-bridge'> <address type='pci' domain='0' bus='0' slot='5' function='0' multifunction='off'/> </controller> </devices> ... Figure 9.13. Domain XML example for PCI bridge For machine types which provide an implicit PCI Express (PCIe) bus (for example, the machine types based on the Q35 chipset), the pcie-root controller with index='0' is auto-added to the domain's configuration. pcie-root has also no address, but provides 31 slots (numbered 1-31) and can only be used to attach PCIe devices. In order to connect standard PCI devices on a system which has a pcie-root controller, a pci controller with model='dmi-to-pci-bridge' is automatically added. A dmi-to-pci-bridge controller plugs into a PCIe slot (as provided by pcie-root), and itself provides 31 standard PCI slots (which are not hot-pluggable). In order to have hot-pluggable PCI slots in the guest system, a pci-bridge controller will also be automatically created and connected to one of the slots of the auto-created dmi-to-pci-bridge controller; all guest devices with PCI addresses that are auto-determined by libvirt will be placed on this pci-bridge device. ... <devices> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='dmi-to-pci-bridge'> <address type='pci' domain='0' bus='0' slot='0xe' function='0'/> </controller> <controller type='pci' index='2' model='pci-bridge'> <address type='pci' domain='0' bus='1' slot='1' function='0'/> </controller> </devices> ... Figure 9.14. Domain XML example for PCIe (PCI express) | [
"<devices> <controller type='ide' index='0'/> <controller type='virtio-serial' index='0' ports='16' vectors='4'/> <controller type='virtio-serial' index='1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </controller> </devices>",
"<devices> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0' bus='0' slot='4' function='7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/> </controller> </devices>",
"<devices> <controller type='pci' index='0' model='pci-root'/> <controller type='pci' index='1' model='pci-bridge'> <address type='pci' domain='0' bus='0' slot='5' function='0' multifunction='off'/> </controller> </devices>",
"<devices> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='dmi-to-pci-bridge'> <address type='pci' domain='0' bus='0' slot='0xe' function='0'/> </controller> <controller type='pci' index='2' model='pci-bridge'> <address type='pci' domain='0' bus='1' slot='1' function='0'/> </controller> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-guest_virtual_machine_device_configuration-configuring_device_controllers |
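For illustration, the following sketch pins the queue count of a virtio-scsi controller to a guest with four vCPUs, matching the recommendation above that the queues value should equal the number of vCPUs:

  ...
  <devices>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver queues='4'/>
    </controller>
  </devices>
  ...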
Chapter 3. Configuration recommendations for the Object Storage service (swift) | Chapter 3. Configuration recommendations for the Object Storage service (swift) If you choose not to deploy Red Hat OpenStack Platform (RHOSP) with Red Hat Ceph Storage, RHOSP director deploys the RHOSP Object Storage service (swift). The Object Store service is the object store for several OpenStack services, including the RHOSP Telemetry service and RabbitMQ. Here are several recommendations to improve your RHOSP performance when using the Telemetry service with the Object Storage service. 3.1. Disk recommendation for the Object Storage service Use one or more separate, local disks for the Red Hat OpenStack Platform (RHOSP) Object Storage service. By default, RHOSP director uses the directory /srv/node/d1 on the system disk for the Object Storage service. On the Controller this disk is also used by other services, and the disk could become a performance bottleneck after the Telemetry service starts recording events in an enterprise setting. The following example is a excerpt from an RHOSP Orchestration service (heat) custom environment file. On each Controller node, the Object Storage service uses two separate disks. The entirety of both disks contains an XFS file system: SwiftRawDisks defines each storage disk on the node. This example defines both sdb and sdc disks on each Controller node. Important When configuring multiple disks, ensure that the Bare Metal service (ironic) uses the intended root disk. Additional resources Defining the Root Disk for Multi-disk Clusters in the Director Installation and Usage guide. 3.2. Defining dedicated Object Storage nodes Dedicating a node to the Red Hat OpenStack Platform (RHOSP) Object Storage service improves performance. Procedure Create a custom roles_data.yaml file (based on the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml ). Edit the custom roles_data.yaml file by removing the Object Storage service entry from the Controller node. Specifically, remove the following line from the ServicesDefault list of the Controller role: Use the ObjectStorageCount resource in your custom environment file to set how many dedicated nodes to allocate for the Object Storage service. For example, add ObjectStorageCount: 3 to the parameter_defaults in your environment file to deploy three dedicated object storage nodes: To apply this configuration, deploy the overcloud, adding roles_data.yaml to the stack along with your other environment files: Additional resources Composable Services and Custom Roles in the Advanced Overcloud Customization guide Adding and Removing Services from Roles in the Advanced Overcloud Customization guide Modifying the Overcloud Environment in the Director Installation and Usage guide 3.3. Partition power recommendation for the Object Storage service When using separate Red Hat OpenStack Platform (RHOSP) Object Storage service nodes, use a higher partition power value. The Object Storage service distributes data across disks and nodes using modified hash rings . There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power . This parameter sets the maximum number of partitions that can be created. The partition power parameter is important and can only be changed for new containers and their objects. As such, it is important to set this value before initial deployment . 
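To make the arithmetic behind this parameter explicit: a partition power of P gives each ring 2^P partitions, so a power of 11 means 2^11 = 2,048 partitions, and with three replicas 3 x 2,048 = 6,144 partition-replicas are distributed across all disks in the ring. This is why a larger number of disks calls for a higher partition power, as the following table shows.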
The default partition power value is 10 for environments that RHOSP director deploys. This is a reasonable value for smaller deployments, especially if you only plan to use disks on the Controller nodes for the Object Storage service. The following table helps you to select an appropriate partition power if you use three replicas: Table 3.1. Appropriate partition power values per number of available disks Partition Power Maximum number of disks 10 ~ 35 11 ~ 75 12 ~ 150 13 ~ 250 14 ~ 500 Important Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times. To set the partition power, use the following resource: Tip You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power. Additional resources Object Storage rings in the Storage Guide The Rings in swift upstream documentation Modifying the Overcloud Environment in the Director Installation and Usage guide | [
"parameter_defaults: SwiftRawDisks: {\"sdb\": {}, \"sdc\": {}}",
"- OS::TripleO::Services::SwiftStorage",
"parameter_defaults: ObjectStorageCount: 3",
"(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/roles_data.yaml",
"parameter_defaults: SwiftPartPower: 11"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deployment_recommendations_for_specific_red_hat_openstack_platform_services/assembly_configuration-recommendations-for-the-object-storage-service-swift |
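Taken together, the per-section snippets in this chapter can live in one custom environment file. The following sketch (the file name and disk names are assumptions) deploys three dedicated Object Storage nodes, uses two extra disks on each node, and raises the partition power to 11:

  parameter_defaults:
    ObjectStorageCount: 3
    SwiftPartPower: 11
    SwiftRawDisks: {"sdb": {}, "sdc": {}}

You would then pass this file with -e to the openstack overcloud deploy command, alongside the custom roles_data.yaml described in section 3.2.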
Chapter 10. Managing Ceph OSDs on the dashboard | Chapter 10. Managing Ceph OSDs on the dashboard As a storage administrator, you can monitor and manage OSDs on the Red Hat Ceph Storage Dashboard. Some of the capabilities of the Red Hat Ceph Storage Dashboard are: List OSDs, their status, statistics, information such as attributes, metadata, device health, performance counters and performance details. Mark OSDs down, in, out, lost, purge, reweight, scrub, deep-scrub, destroy, delete, and select profiles to adjust backfilling activity. List all drives associated with an OSD. Set and change the device class of an OSD. Deploy OSDs on new drives and hosts. Prerequisites A running Red Hat Ceph Storage cluster cluster-manager level of access on the Red Hat Ceph Storage dashboard 10.1. Managing the OSDs on the Ceph dashboard You can carry out the following actions on a Ceph OSD on the Red Hat Ceph Storage Dashboard: Create a new OSD. Edit the device class of the OSD. Mark the Flags as No Up , No Down , No In , or No Out . Scrub and deep-scrub the OSDs. Reweight the OSDs. Mark the OSDs Out , In , Down , or Lost . Purge the OSDs. Destroy the OSDs. Delete the OSDs. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts, Monitors, and Manager Daemons are added to the storage cluster. Procedure From the dashboard navigation, go to Cluster->OSDs . Creating an OSD To create the OSD, from the OSDs List table, click Create . Figure 10.1. Add device for OSDs Note Ensure you have an available host and a few available devices. Check for available devices in Cluster->Physical Disks and filter for Available . In the Create OSDs form, in the Deployment Options section, select one of the following options: Cost/Capacity-optimized : The cluster gets deployed with all available HDDs. Throughput-optimized : Slower devices are used to store data and faster devices are used to store journals/WALs. IOPS-optmized : All the available NVMe devices are used to deploy OSDs. In the Advanced Mode section, add primary, WAL, and DB devices by clicking Add . Primary devices : Primary storage devices contain all OSD data. WAL devices : Write-Ahead-Log devices are used for BlueStore's internal journal and are used only if the WAL device is faster than the primary device. For example, NVMe or SSD devices. DB devices : DB devices are used to store BlueStore's internal metadata and are used only if the DB device is faster than the primary device. For example, NVMe or SSD devices. To encrypt your data, for security purposes, from the Features section of the form, select Encryption . Click Preview . In the OSD Creation Preview dialog review the OSD and click Create . A notification displays that the OSD was created successfully and the OSD status changes from in and down to in and up . Editing an OSD To edit an OSD, select the row and click Edit . From the Edit OSD form, edit the device class. Click Edit OSD . Figure 10.2. Edit an OSD A notification displays that the OSD was updated successfully. Marking the OSD flags To mark the flag of the OSD, select the row and click Flags from the action drop-down. In the Individual OSD Flags form, select the OSD flags needed. Click Update . Figure 10.3. Marking OSD flags A notification displays that the OSD flags updated successfully. Scrubbing an OSD To scrub an OSD, select the row and click Scrub from the action drop-down. In the OSDs Scrub notification, click Update . Figure 10.4. 
Scrubbing an OSD A notification displays that the scrubbing of the OSD was initiated successfully. Deep-scrubbing the OSDs To deep-scrub the OSD, select the row and click Deep Scrub from the action drop-down. In the OSDs Deep Scrub notification, click Update . Figure 10.5. Deep-scrubbing an OSD A notification displays that the deep scrubbing of the OSD was initiated successfully. Reweighting the OSDs To reweight the OSD, select the row and click Reweight from the action drop-down. In the Reweight OSD form enter a value between 0 and 1. Click Reweight . Figure 10.6. Reweighting an OSD Marking OSDs out To mark an OSD as out , select the row and click Mark Out from the action drop-down. In the Mark OSD out notification, click Mark Out . Figure 10.7. Marking OSDs out The OSD status changes to out . Marking OSDs in To mark an OSD as in , select the OSD row that is in out status and click Mark In from the action drop-down. In the Mark OSD in notification, click Mark In . Figure 10.8. Marking OSDs in The OSD status changes to in . Marking OSDs down To mark an OSD down , select the row and click Mark Down from the action drop-down. In the Mark OSD down notification, click Mark Down . Figure 10.9. Marking OSDs down The OSD status changes to down . Marking OSDs lost To mark an OSD lost , select the OSD in out and down status and click Mark Lost from the action drop-down. In the Mark OSD Lost notification, select Yes, I am sure and click Mark Lost . Figure 10.10. Marking OSDs lost Purging OSDs To purge an OSD, select the OSD in down status and click Purge from the action drop-down. In the Purge OSDs notification, select Yes, I am sure and click Purge OSD . Figure 10.11. Purging OSDs All the flags are reset and the OSD is back in in and up status. Destroying OSDs To destroy an OSD, select the OSD in down status and click Destroy from the action drop-down. In the Destroy OSDs notification, select Yes, I am sure and click Destroy OSD . Figure 10.12. Destroying OSDs The OSD status changes to destroyed . Deleting OSDs To delete an OSD, select the OSD and click Delete from the action drop-down. In the Delete OSDs notification, select Yes, I am sure and click Delete OSD . Note You can preserve the OSD_ID when you have to to replace the failed OSD. Figure 10.13. Deleting OSDs 10.2. Replacing the failed OSDs on the Ceph dashboard You can replace the failed OSDs in a Red Hat Ceph Storage cluster with the cluster-manager level of access on the dashboard. One of the highlights of this feature on the dashboard is that the OSD IDs can be preserved while replacing the failed OSDs. Prerequisites A running Red Hat Ceph Storage cluster. At least cluster-manager level of access to the Ceph Dashboard. At least one of the OSDs is down Procedure On the dashboard, you can identify the failed OSDs in the following ways: Dashboard AlertManager pop-up notifications. Dashboard landing page showing HEALTH_WARN status. Dashboard landing page showing failed OSDs. Dashboard OSD page showing failed OSDs. In this example, you can see that one of the OSDs is down on the landing page of the dashboard. You can also view the LED blinking lights on the physical drive if one of the OSDs is down. From Cluster->OSDs , on the OSDs List table, select the out and down OSD. Click Flags from the action drop-down, select No Up in the Individual OSD Flags form, and click Update . Click Delete from the action drop-down. In the Delete OSD notification, select Preserve OSD ID(s) for replacement and Yes, I am sure and click Delete OSD . 
Wait until the status of the OSD changes to out and destroyed . Optional: To change the No Up Flag for the entire cluster, from the Cluster-wide configuration menu, select Flags . In Cluster-wide OSDs Flags form, select No Up and click Update . Optional: If the OSDs are down due to a hard disk failure, replace the physical drive: If the drive is hot-swappable, replace the failed drive with a new one. If the drive is not hot-swappable and the host contains multiple OSDs, you might have to shut down the whole host and replace the physical drive. Consider preventing the cluster from backfilling. See the Stopping and Starting Rebalancing chapter in the Red Hat Ceph Storage Troubleshooting Guide for details. When the drive appears under the /dev/ directory, make a note of the drive path. If you want to add the OSD manually, find the OSD drive and format the disk. If the new disk has data, zap the disk: Syntax Example From the Ceph Dashboard OSDs List , click Create . In the Create OSDs form Advanced Mode section, add a primary device. In the Primary devices dialog, select a Hostname filter. Select a device type from the list. Note You have to select the Hostname first and then at least one filter to add the devices. For example, from Hostname list, select Type and then hdd . Select Vendor and from device list, select ATA . Click Add . In the Create OSDs form, click Preview . In the OSD Creation Preview dialog, click Create . A notification displays that the OSD is created successfully and the OSD changes to be in the out and down status. Select the newly created OSD that has out and down status. Click Mark In from the action drop-down. In the Mark OSD in notification, click Mark In . The OSD status changes to in . Click Flags from the action drop-down. Clear the No Up selection and click Update . Optional: If you have changed the No Up flag before for cluster-wide configuration, in the Cluster-wide configuration menu, select Flags . In Cluster-wide OSDs Flags form, clear the No Up selection and click Update . Verification Verify that the OSD that was destroyed is created on the device and the OSD ID is preserved. Additional Resources For more information on Down OSDs, see the Down OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . For additional assistance see the Red Hat Support for service section in the Red Hat Ceph Storage Troubleshooting Guide . For more information on system roles, see the Managing roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . | [
"ceph orch device zap HOST_NAME PATH --force",
"ceph orch device zap ceph-adm2 /dev/sdc --force"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/management-of-ceph-osds-on-the-dashboard |
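To confirm from the command line that a replacement OSD has come up with the preserved ID and is backed by the new drive, a few read-only checks can be run from a node with admin keyring access; the OSD ID 7 used here is only an example:

  ceph osd tree
  ceph osd metadata 7 | grep -E '"devices"|"hostname"'
  ceph -s

ceph osd tree should show the OSD as up and in under the expected host, and ceph -s should report the cluster returning to HEALTH_OK once backfilling completes.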
Release Notes for Red Hat build of Quarkus 3.15 | Release Notes for Red Hat build of Quarkus 3.15 Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | [
"quarkus config encrypt --secret=<xyz123>",
"quarkus config encrypt <xyz123>",
"quarkus config set --name=<abc> --value=<xyz123>",
"quarkus config set <abc> <xyz123>",
"quarkus config remove <abc>",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-observability-devservices-lgtm</artifactId> <version>USD{project.version}</version> <scope>provided</scope> </dependency>",
"@ClientBasicAuth(username = \"USD{service.username}\", password = \"USD{service.password}\") public interface SomeClient { }",
"%test.quarkus.oidc.auth-server-url=USD{keycloak.url}/realms/quarkus/",
"%test.quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/",
"@Inject RemoteCache<String, Book> booksCache; ... QueryFactory queryFactory = Search.getQueryFactory(booksCache); Query query = queryFactory.create(\"from book_sample.Book\"); List<Book> list = query.execute().list();",
"@Inject RemoteCache<String, Book> booksCache; ... Query<Book> query = booksCache.<Book>query(\"from book_sample.Book\"); List<Book> list = query.execute().list();",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </plugin>",
"<plugin> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <id>default-compile</id> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </execution> </executions> </plugin>",
"<build> <plugins> <!-- other plugins --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.13.0</version> <!-- Necessary for proper dependency management in annotationProcessorPaths --> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-panache-common</artifactId> </path> </annotationProcessorPaths> </configuration> </plugin> <!-- other plugins --> </plugins> </build>",
"dependencies { annotationProcessor \"io.quarkus:quarkus-panache-common\" }",
"package org.acme; import org.eclipse.microprofile.reactive.messaging.Incoming; import org.eclipse.microprofile.reactive.messaging.Outgoing; @Incoming(\"source\") @Outgoing(\"sink\") public Result process(int payload) { return new Result(payload); }",
"package org.acme; import io.smallrye.common.annotation.NonBlocking; import org.eclipse.microprofile.reactive.messaging.Incoming; @Incoming(\"source\") @NonBlocking public void consume(int payload) { // called on I/O thread }",
"<properties> <junit-pioneer.version>2.2.0</junit-pioneer.version> </properties>",
"@Path(\"/records\") public class RecordsResource { @Inject HalService halService; @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @RestLink(rel = \"list\") public HalCollectionWrapper<Record> getAll() { List<Record> list = // HalCollectionWrapper<Record> halCollection = halService.toHalCollectionWrapper( list, \"collectionName\", Record.class); // return halCollection; } @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @Path(\"/{id}\") @RestLink(rel = \"self\") @InjectRestLinks(RestLinkType.INSTANCE) public HalEntityWrapper<Record> get(@PathParam(\"id\") int id) { Record entity = // HalEntityWrapper<Record> halEntity = halService.toHalWrapper(entity); // return halEntity; } }",
"package io.quarkus.resteasy.reactive.server.test.customproviders; import jakarta.ws.rs.NotFoundException; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; @Provider public class NotFoundExeptionMapper implements ExceptionMapper<NotFoundException> { @Override public Response toResponse(NotFoundException exception) { return Response.status(404).build(); } }",
"org.eclipse.microprofile.reactive.messaging.MessageUSD5@1e8dc267 from channel 'test' was not sent to Kafka topic 'test' - nacking message: org.apache.kafka.common.KafkaException: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Windows and os.arch=x86_64",
"run.repos=central,https://maven.repository.redhat.com/ga/",
"jbang config set run.repos central,https://maven.repository.redhat.com/ga/",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update -Dmaven.repo.local=<path-to-local-repo>",
"<properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version>3.15.3.SP1-redhat-00002</quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-amazon-services-bom</artifactId> <version>3.2.12.Final</version> 1 </dependency> </dependencies> </dependencyManagement>",
"2024-10-17 10:45:01,931 ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (executor-thread-1) HTTP Request to /repro failed, error id: 9b1f5dbb-058b-4c9b-9377-f3acc0a6cba5-1: java.lang.RuntimeException: java.lang.NullPointerException at org.acme.ReproResource.init(ReproResource.java:38)",
"quarkus.security.security-providers=SunPKCS11",
"quarkus.native.additional-build-args=--initialize-at-run-time=org.acme.ReproResource"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html-single/release_notes_for_red_hat_build_of_quarkus_3.15/index |
Chapter 53. KIE Server | Chapter 53. KIE Server KIE Server is the server where the rules and other artifacts for Red Hat Process Automation Manager are stored and run. KIE Server is a standalone built-in component that can be used to instantiate and execute rules through interfaces available for REST, Java Message Service (JMS), or Java client-side applications, as well as to manage processes, jobs, and Red Hat build of OptaPlanner functionality through solvers. Created as a web deployable WAR file, KIE Server can be deployed on any web container. The current version of KIE Server is included with default extensions for both Red Hat Decision Manager and Red Hat Process Automation Manager. KIE Server has a low footprint with minimal memory consumption and therefore can be deployed easily on a cloud instance. Each instance of this server can open and instantiate multiple containers, which enables you to execute multiple rule services in parallel. KIE Server can be integrated with other application servers, such as Oracle WebLogic Server or IBM WebSphere Application Server, to streamline Red Hat Process Automation Manager application management. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/kie-server-con_kie-server-on-wls |
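As a small illustration of the REST interface mentioned above, listing the KIE containers deployed to a running KIE Server instance looks roughly like the following; the host, port, and credentials are placeholders for your own environment:

  curl -u kieserver:password -H "Accept: application/json" \
      http://localhost:8080/kie-server/services/rest/server/containers

The same base endpoint, /kie-server/services/rest/server, also reports which server capabilities are enabled, such as the rules, process, and solver extensions.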
3.3. Adding Devices to the Multipathing Database | 3.3. Adding Devices to the Multipathing Database By default, DM-Multipath includes support for the most common storage arrays that support DM-Multipath. The default configuration values, including supported devices, can be found in the multipath.conf.defaults file. If you need to add a storage device that is not supported by default as a known multipath device, edit the /etc/multipath.conf file and insert the appropriate device information. For example, to add information about the HP Open-V series, the entry looks like this: For more information on the devices section of the configuration file, see Section 4.5, "Configuration File Devices" . | [
"devices { device { vendor \"HP\" product \"OPEN-V.\" getuid_callout \"/sbin/scsi_id -g -u -p0x80 -s /block/%n\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/mp_device_add |
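After adding a devices entry such as the HP OPEN-V example above, multipathd has to re-read its configuration before the new array is treated as a known device. One way to do that and confirm the result, assuming the multipathd service is already running, is:

  service multipathd restart
  multipath -ll

The multipath -ll output should then list the paths for the newly added device type with the attributes taken from your devices section.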
3.6. Runtime Device Power Management | 3.6. Runtime Device Power Management Runtime device power management (RDPM) helps to reduce power consumption with minimum user-visible impact. If a device has been idle for a sufficient time and the RDPM hardware support exists in both the device and driver, the device is put into a lower power state. The recovery from the lower power state is assured by an external I/O event for this device, which triggers the kernel and the device driver to bring the device back to the running state. All this occurs automatically, as RDPM is enabled by default. Users can control RDPM for a device by setting the attributes in its RDPM configuration files. The RDPM configuration files for particular devices can be found in the /sys/devices/ device /power/ directory, where device is a placeholder for the path to the directory of that particular device. For example, to configure the RDPM for a CPU, access this directory: Bringing a device back from a lower power state to the running state adds additional latency to the I/O operation. The duration of that additional delay is device-specific. The configuration scheme described here allows the system administrator to disable RDPM on a device-by-device basis and to both examine and control some of the other parameters. Every /sys/devices/ device /power directory contains the following configuration files: control This file is used to enable or disable RDPM for a particular device. The attribute in the control file has one of the following two values: auto the default for all devices; they may be subject to automatic RDPM, depending on their driver on prevents the driver from managing the device's power state at run time autosuspend_delay_ms This file controls the auto-suspend delay, which is the minimum period of inactivity between the device becoming idle and the device being suspended. The file contains the auto-suspend delay value in milliseconds. A negative value prevents the device from being suspended at run time, thus having the same effect as setting the attribute in the /sys/devices/ device /power/control file to on . Values higher than 1000 are rounded up to the nearest second. | [
"/sys/devices/system/cpu/power/"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/runtime_device_power_management |
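To make the control and autosuspend_delay_ms attributes concrete, the following shell sketch reads and changes them for one device; the USB device directory used here is only an example, so substitute the /sys/devices/ path of the device you want to tune:

  cat /sys/bus/usb/devices/usb1/power/control
  echo auto > /sys/bus/usb/devices/usb1/power/control
  echo 2000 > /sys/bus/usb/devices/usb1/power/autosuspend_delay_ms

The first command shows the current setting (on or auto), the second allows the driver to suspend the device at run time, and the third tells it to wait for two seconds of inactivity before doing so.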
Chapter 9. Scheduler Tapset | Chapter 9. Scheduler Tapset This family of probe points is used to probe the task scheduler activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/sched-dot-stp |
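As a minimal illustration of how this probe family is used, the following one-liner counts how often the scheduler.cpu_on probe fires over five seconds; scheduler.cpu_on is hit when a task is about to resume running on a CPU, and the script deliberately avoids probe-local variables:

  stap -e 'global hits
  probe scheduler.cpu_on { hits++ }
  probe timer.s(5) { printf("scheduler.cpu_on fired %d times\n", hits); exit() }'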
Chapter 15. Managing containers using the Ansible playbook | Chapter 15. Managing containers using the Ansible playbook With Podman 4.2, you can use the Podman RHEL system role to manage Podman configuration, containers, and systemd services which run Podman containers. RHEL system roles provide a configuration interface to remotely manage multiple RHEL systems. You can use the interface to manage system configurations across multiple versions of RHEL, as well as adopting new major releases. For more information, see the Automating system administration by using RHEL system roles . 15.1. Creating a rootless container with bind mount by using the podman RHEL system role You can use the podman RHEL system role to create rootless containers with bind mount by running an Ansible playbook and with that, manage your application configuration. The example Ansible playbook starts two Kubernetes pods: one for a database and another for a web application. The database pod configuration is specified in the playbook, while the web application pod is defined in an external YAML file. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The user and group webapp exist, and must be listed in the /etc/subuid and /etc/subgid files on the host. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: - name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Create a web application and a database ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_firewall: - port: 8080-8081/tcp state: enabled - port: 12340/tcp state: enabled podman_selinux_ports: - ports: 8080-8081 setype: http_port_t podman_kube_specs: - state: started run_as_user: dbuser run_as_group: dbgroup kube_file_content: apiVersion: v1 kind: Pod metadata: name: db spec: containers: - name: db image: quay.io/linux-system-roles/mysql:5.6 ports: - containerPort: 1234 hostPort: 12340 volumeMounts: - mountPath: /var/lib/db:Z name: db volumes: - name: db hostPath: path: /var/lib/db - state: started run_as_user: webapp run_as_group: webapp kube_file_src: /path/to/webapp.yml The settings specified in the example playbook include the following: run_as_user and run_as_group Specify that containers are rootless. kube_file_content Contains a Kubernetes YAML file defining the first container named db . You can generate the Kubernetes YAML file by using the podman kube generate command. The db container is based on the quay.io/db/db:stable container image. The db bind mount maps the /var/lib/db directory on the host to the /var/lib/db directory in the container. The Z flag labels the content with a private unshared label, therefore, only the db container can access the content. kube_file_src: <path> Defines the second container. The content of the /path/to/webapp.yml file on the controller node will be copied to the kube_file field on the managed node. volumes: <list> A YAML list to define the source of the data to provide in one or more containers. For example, a local disk on the host ( hostPath ) or other disk device. volumeMounts: <list> A YAML list to define the destination where the individual container will mount a given volume. podman_create_host_directories: true Creates the directory on the host. 
This instructs the role to check the kube specification for hostPath volumes and create those directories on the host. If you need more control over the ownership and permissions, use podman_host_directories . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.podman/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.podman/README.md file /usr/share/doc/rhel-system-roles/podman/ directory 15.2. Creating a rootful container with Podman volume by using the podman RHEL system role You can use the podman RHEL system role to create a rootful container with a Podman volume by running an Ansible playbook and with that, manage your application configuration. The example Ansible playbook deploys a Kubernetes pod named ubi8-httpd running an HTTP server container from the registry.access.redhat.com/ubi8/httpd-24 image. The container's web content is mounted from a persistent volume named ubi8-html-volume . By default, the podman role creates rootful containers. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: - name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Start Apache server on port 8080 ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_firewall: - port: 8080/tcp state: enabled podman_kube_specs: - state: started kube_file_content: apiVersion: v1 kind: Pod metadata: name: ubi8-httpd spec: containers: - name: ubi8-httpd image: registry.access.redhat.com/ubi8/httpd-24 ports: - containerPort: 8080 hostPort: 8080 volumeMounts: - mountPath: /var/www/html:Z name: ubi8-html volumes: - name: ubi8-html persistentVolumeClaim: claimName: ubi8-html-volume The settings specified in the example playbook include the following: kube_file_content Contains a Kubernetes YAML file defining the first container named db . You can generate the Kubernetes YAML file by using the podman kube generate command. The ubi8-httpd container is based on the registry.access.redhat.com/ubi8/httpd-24 container image. The ubi8-html-volume maps the /var/www/html directory on the host to the container. The Z flag labels the content with a private unshared label, therefore, only the ubi8-httpd container can access the content. The pod mounts the existing persistent volume named ubi8-html-volume with the mount path /var/www/html . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.podman/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.podman/README.md file /usr/share/doc/rhel-system-roles/podman/ directory 15.3. Creating a Quadlet application with secrets by using the podman RHEL system role You can use the podman RHEL system role to create a Quadlet application with secrets by running an Ansible playbook. 
Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The certificate and the corresponding private key that the web server in the container should use are stored in the ~/certificate.pem and ~/key.pem files. Procedure Display the contents of the certificate and private key files: You require this information in a later step. Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: root_password: <root_password> certificate: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- key: |- -----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY----- Ensure that all lines in the certificate and key variables start with two spaces. Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: - name: Deploy a wordpress CMS with MySQL database hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and run the container ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_activate_systemd_unit: false podman_quadlet_specs: - name: quadlet-demo type: network file_content: | [Network] Subnet=192.168.30.0/24 Gateway=192.168.30.1 Label=app=wordpress - file_src: quadlet-demo-mysql.volume - template_src: quadlet-demo-mysql.container.j2 - file_src: envoy-proxy-configmap.yml - file_src: quadlet-demo.yml - file_src: quadlet-demo.kube activate_systemd_unit: true podman_firewall: - port: 8000/tcp state: enabled - port: 9000/tcp state: enabled podman_secrets: - name: mysql-root-password-container state: present skip_existing: true data: "{{ root_password }}" - name: mysql-root-password-kube state: present skip_existing: true data: | apiVersion: v1 data: password: "{{ root_password | b64encode }}" kind: Secret metadata: name: mysql-root-password-kube - name: envoy-certificates state: present skip_existing: true data: | apiVersion: v1 data: certificate.key: {{ key | b64encode }} certificate.pem: {{ certificate | b64encode }} kind: Secret metadata: name: envoy-certificates The procedure creates a WordPress content management system paired with a MySQL database. The podman_quadlet_specs role variable defines a set of configurations for the Quadlet, which refers to a group of containers or services that work together in a certain way. It includes the following specifications: The Wordpress network is defined by the quadlet-demo network unit. The volume configuration for MySQL container is defined by the file_src: quadlet-demo-mysql.volume field. The template_src: quadlet-demo-mysql.container.j2 field is used to generate a configuration for the MySQL container. Two YAML files follow: file_src: envoy-proxy-configmap.yml and file_src: quadlet-demo.yml . Note that .yml is not a valid Quadlet unit type, therefore these files will just be copied and not processed as a Quadlet specification. The Wordpress and envoy proxy containers and configuration are defined by the file_src: quadlet-demo.kube field. The kube unit refers to the YAML files in the [Kube] section as Yaml=quadlet-demo.yml and ConfigMap=envoy-proxy-configmap.yml . 
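The files referenced by file_src are copied from the control node, and the template_src file is rendered as a Jinja2 template, so their exact contents depend on your application and are not shown in the playbook. As an illustrative sketch only — the unit keys below are assumptions about a typical Quadlet container unit, not the files shipped with the role, and the image is reused from the earlier MySQL example — a minimal quadlet-demo-mysql.container.j2 could look like this:

[Install]
WantedBy=default.target

[Container]
ContainerName=quadlet-demo-mysql
Image=quay.io/linux-system-roles/mysql:5.6
Network=quadlet-demo.network
Volume=quadlet-demo-mysql.volume:/var/lib/mysql
# Expose the Podman secret created by podman_secrets to the container as an environment variable (assumed mapping)
Secret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD

Because the file is a Jinja2 template, role or host variables can be interpolated into it before Quadlet installs it as quadlet-demo-mysql.container on the managed node.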
Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.podman/README.md file /usr/share/doc/rhel-system-roles/podman/ directory | [
"- name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Create a web application and a database ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_firewall: - port: 8080-8081/tcp state: enabled - port: 12340/tcp state: enabled podman_selinux_ports: - ports: 8080-8081 setype: http_port_t podman_kube_specs: - state: started run_as_user: dbuser run_as_group: dbgroup kube_file_content: apiVersion: v1 kind: Pod metadata: name: db spec: containers: - name: db image: quay.io/linux-system-roles/mysql:5.6 ports: - containerPort: 1234 hostPort: 12340 volumeMounts: - mountPath: /var/lib/db:Z name: db volumes: - name: db hostPath: path: /var/lib/db - state: started run_as_user: webapp run_as_group: webapp kube_file_src: /path/to/webapp.yml",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"- name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Start Apache server on port 8080 ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_firewall: - port: 8080/tcp state: enabled podman_kube_specs: - state: started kube_file_content: apiVersion: v1 kind: Pod metadata: name: ubi8-httpd spec: containers: - name: ubi8-httpd image: registry.access.redhat.com/ubi8/httpd-24 ports: - containerPort: 8080 hostPort: 8080 volumeMounts: - mountPath: /var/www/html:Z name: ubi8-html volumes: - name: ubi8-html persistentVolumeClaim: claimName: ubi8-html-volume",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cat ~/certificate.pem -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- cat ~/key.pem -----BEGIN PRIVATE KEY----- -----END PRIVATE KEY-----",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"root_password: <root_password> certificate: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- key: |- -----BEGIN PRIVATE KEY----- -----END PRIVATE KEY-----",
"- name: Deploy a wordpress CMS with MySQL database hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and run the container ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_activate_systemd_unit: false podman_quadlet_specs: - name: quadlet-demo type: network file_content: | [Network] Subnet=192.168.30.0/24 Gateway=192.168.30.1 Label=app=wordpress - file_src: quadlet-demo-mysql.volume - template_src: quadlet-demo-mysql.container.j2 - file_src: envoy-proxy-configmap.yml - file_src: quadlet-demo.yml - file_src: quadlet-demo.kube activate_systemd_unit: true podman_firewall: - port: 8000/tcp state: enabled - port: 9000/tcp state: enabled podman_secrets: - name: mysql-root-password-container state: present skip_existing: true data: \"{{ root_password }}\" - name: mysql-root-password-kube state: present skip_existing: true data: | apiVersion: v1 data: password: \"{{ root_password | b64encode }}\" kind: Secret metadata: name: mysql-root-password-kube - name: envoy-certificates state: present skip_existing: true data: | apiVersion: v1 data: certificate.key: {{ key | b64encode }} certificate.pem: {{ certificate | b64encode }} kind: Secret metadata: name: envoy-certificates",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/managing-containers-using-the-ansible-playbook_building-running-and-managing-containers |
5.359. xfsprogs | 5.359. xfsprogs 5.359.1. RHBA-2012:0883 - xfsprogs bug fix update Updated xfsprogs packages that fix four bugs are now available for Red Hat Enterprise Linux 6. The xfsprogs packages contain a set of commands to use the XFS file system, including mkfs.xfs. Bug Fixes BZ# 730886 Prior to this update, certain file names could cause the xfs_metadump utility to become suspended when generating obfuscated names. This update modifies the underlying code so that xfs_metadump now works as expected. BZ# 738279 Prior to this update, the allocation group size (agsize) was computed incorrectly during mkfs for some filesystem sizes. As a consequence, creating file systems could fail if file system blocks within an allocation group (agblocks) were increased past the maximum. This update modifies the computing method so that agblocks are no longer increased past the maximum. BZ# 749434 Prior to this update, the xfs_quota utility failed with the error message "xfs_quota: cannot initialise path table: No such file or directory" if an invalid xfs entry was encountered in the mtab. This update modifies the xfs_quota utility so that the xfs_quota utility now runs as expected. BZ# 749435 Prior to this update, the xfs_quota utility reported that the project quota values were twice as high as expected. This update modifies the xfs_quota utility so that it now reports the correct values. All users who use the XFS file system are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xfsprogs |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.16/making-open-source-more-inclusive |
Chapter 3. Install and Use the Maven Repositories | Chapter 3. Install and Use the Maven Repositories 3.1. About Maven Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built correctly and in a uniform manner. Important Red Hat JBoss Data Grid requires Maven 3 (or better) for all quickstarts and general use. Visit the Maven Download page ( http://maven.apache.org/download.html ) for instructions to download and install Maven. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-install_and_use_the_maven_repositories
Chapter 17. Data objects | Chapter 17. Data objects Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with data fields Name , Address , and DateOfBirth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision services are based on. 17.1. Creating data objects The following procedure is a generic overview of creating data objects. It is not specific to a particular business asset. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Data Object . Enter a unique Data Object name and select the Package where you want the data object to be available for other rule assets. Data objects with the same name cannot exist in the same package. In the specified DRL file, you can import a data object from any package. Importing data objects from other packages You can import an existing data object from another package directly into the asset designers like guided rules or guided decision table designers. Select the relevant rule asset within the project and in the asset designer, go to Data Objects New item to select the object to be imported. To make your data object persistable, select the Persistable checkbox. Persistable data objects are able to be stored in a database according to the JPA specification. The default JPA is Hibernate. Click Ok . In the data object designer, click add field to add a field to the object with the attributes Id , Label , and Type . Required attributes are marked with an asterisk (*). Id: Enter the unique ID of the field. Label: (Optional) Enter a label for the field. Type: Enter the data type of the field. List: (Optional) Select this check box to enable the field to hold multiple items for the specified type. Figure 17.1. Add data fields to a data object Click Create to add the new field, or click Create and continue to add the new field and continue adding other fields. Note To edit a field, select the field row and use the general properties on the right side of the screen. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/data-objects-con_drl-rules |
Chapter 13. Red Hat Quay quota management and enforcement overview | Chapter 13. Red Hat Quay quota management and enforcement overview With Red Hat Quay, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Red Hat Quay users are now equipped with the following capabilities to manage the capacity limits of their environment: Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization. Quota management: With this feature, a superuser can define soft and hard checks for Red Hat Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit. Together, these features allow service owners of a Red Hat Quay registry to define service level agreements and support a healthy resource budget. 13.1. Quota management limitations Quota management helps organizations to maintain resource consumption. One limitation of quota management is that calculating resource consumption on push results in the calculation becoming part of the push's critical path. Without this, usage data might drift. The maximum storage quota size is dependent on the selected database: Table 13.1. Maximum storage quota size by database Database Maximum quota size Postgres 8388608 TB MySQL 8388608 TB SQL Server 16777216 TB 13.2. Quota management for Red Hat Quay 3.9 If you are upgrading to Red Hat Quay 3.9, you must reconfigure the quota management feature. This is because with Red Hat Quay 3.9, calculation is done differently. As a result, totals prior to Red Hat Quay 3.9 are no longer valid. There are two methods for configuring quota management in Red Hat Quay 3.9, which are detailed in the following sections. Note This is a one-time calculation that must be done after you have upgraded to Red Hat Quay 3.9. Superuser privileges are required to create, update and delete quotas. While quotas can be set for users as well as organizations, you cannot reconfigure the user quota using the Red Hat Quay UI and you must use the API instead. 13.2.1. Option A: Configuring quota management for Red Hat Quay 3.9 by adjusting the QUOTA_TOTAL_DELAY feature flag Use the following procedure to recalculate Red Hat Quay 3.9 quota management by adjusting the QUOTA_TOTAL_DELAY feature flag. Note With this recalculation option, the totals appear as 0.00 KB until the allotted time designated for QUOTA_TOTAL_DELAY . Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true 1 The QUOTA_TOTAL_DELAY_SECONDS flag defaults to 1800 seconds, or 30 minutes. This allows Red Hat Quay 3.9 to successfully deploy before the quota management feature begins calculating storage consumption for every blob that has been pushed. Setting this flag to a lower number might result in miscalculation; it must be set to a number that is greater than the time it takes your Red Hat Quay deployment to start.
1800 is the recommended setting, however larger deployments that take longer than 30 minutes to start might require a longer duration than 1800 . Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Additionally, the Backfill Queued indicator should be present. After the allotted time, for example, 30 minutes, refresh your Red Hat Quay deployment page and return to your Organization. Now, the Total Quota Consumed should be present. 13.2.2. Option B: Configuring quota management for Red Hat Quay 3.9 by setting QUOTA_TOTAL_DELAY_SECONDS to 0 Use the following procedure to recalculate Red Hat Quay 3.9 quota management by setting QUOTA_TOTAL_DELAY_SECONDS to 0 . Note Using this option prevents the possibility of miscalculations, however it is more time intensive. Use the following procedure when your Red Hat Quay deployment swaps the FEATURE_QUOTA_MANAGEMENT parameter from false to true . Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged into Red Hat Quay 3.9 as a superuser. Procedure Deploy Red Hat Quay 3.9 with the following config.yaml settings: FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true Navigate to the Red Hat Quay UI and click the name of your Organization. The Total Quota Consumed should read 0.00 KB . Redeploy Red Hat Quay with the QUOTA_BACKFILL flag set to true . For example: QUOTA_BACKFILL: true Note If you choose to disable quota management after it has calculated totals, Red Hat Quay marks those totals as stale. If you re-enable the quota management feature again in the future, those namespaces and repositories are recalculated by the backfill worker. 13.3. Testing quota management for Red Hat Quay 3.9 With quota management configured for Red Hat Quay 3.9, duplicative images are now only counted once towards the repository total. Use the following procedure to test that a duplicative image is only counted once toward the repository total. Prerequisites You have configured quota management for Red Hat Quay 3.9. Procedure Pull a sample image, for example, ubuntu:18.04 , by entering the following command: USD podman pull ubuntu:18.04 Tag the same image twice by entering the following commands: USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1 USD podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2 Push the sample image to your organization by entering the following commands: USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1 USD podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2 On the Red Hat Quay UI, navigate to Organization and click the Repository Name , for example, quota-test/ubuntu . Then, click Tags . There should be two repository tags, tag1 and tag2 , each with the same manifest. For example: However, by clicking on the Organization link, we can see that the Total Quota Consumed is 27.94 MB , meaning that the Ubuntu image has only been accounted for once: If you delete one of the Ubuntu tags, the Total Quota Consumed remains the same. Note If you have configured the Red Hat Quay time machine to be longer than 0 seconds, subtraction will not happen until those tags pass the time machine window.
If you want to expedite permanent deletion, see Permanently deleting an image tag in Red Hat Quay 3.9. 13.4. Setting default quota To specify a system-wide default storage quota that is applied to every organization and user, you can use the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES configuration flag. If you configure a specific quota for an organization or user, and then delete that quota, the system-wide default quota will apply if one has been set. Similarly, if you have configured a specific quota for an organization or user, and then modify the system-wide default quota, the updated system-wide default will override any specific settings. 13.5. Establishing quota in Red Hat Quay UI The following procedure describes how you can report storage consumption and establish storage quota limits. Prerequisites A Red Hat Quay registry. A superuser account. Enough storage to meet the demands of quota limitations. Procedure Create a new organization or choose an existing one. Initially, no quota is configured, as can be seen on the Organization Settings tab: Log in to the registry as a superuser and navigate to the Manage Organizations tab on the Super User Admin Panel . Click the Options icon of the organization for which you want to create storage quota limits: Click Configure Quota and enter the initial quota, for example, 10 MB . Then click Apply and Close : Check that the quota consumed shows 0 of 10 MB on the Manage Organizations tab of the superuser panel: The consumed quota information is also available directly on the Organization page: Initial consumed quota To increase the quota to 100 MB, navigate to the Manage Organizations tab on the superuser panel. Click the Options icon and select Configure Quota , setting the quota to 100 MB. Click Apply and then Close : Pull a sample image by entering the following command: USD podman pull ubuntu:18.04 Tag the sample image by entering the following command: USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 Push the sample image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 On the superuser panel, the quota consumed per organization is displayed: The Organization page shows the total proportion of the quota used by the image: Total Quota Consumed for first image Pull a second sample image by entering the following command: USD podman pull nginx Tag the second image by entering the following command: USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx Push the second image to the organization by entering the following command: USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx The Organization page shows the total proportion of the quota used by each repository in that organization: Total Quota Consumed for each repository Create reject and warning limits: From the superuser panel, navigate to the Manage Organizations tab. Click the Options icon for the organization and select Configure Quota .
In the Quota Policy section, with the Action type set to Reject , set the Quota Threshold to 80 and click Add Limit : To create a warning limit, select Warning as the Action type, set the Quota Threshold to 70 and click Add Limit : Click Close on the quota popup. The limits are viewable, but not editable, on the Settings tab of the Organization page: Push an image where the reject limit is exceeded: Because the reject limit (80%) has been set to below the current repository size (~83%), the pushed image is rejected automatically. Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace When limits are exceeded, notifications are displayed in the UI: Quota notifications 13.6. Establishing quota with the Red Hat Quay API When an organization is first created, it does not have a quota applied. Use the /api/v1/organization/{organization}/quota endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output [] 13.6.1. 
Setting the quota To set a quota for an organization, POST data to the /api/v1/organization/{orgname}/quota endpoint: .Sample command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq Sample output "Created" 13.6.2. Viewing the quota To see the applied quota, GET data from the /api/v1/organization/{orgname}/quota endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output [ { "id": 1, "limit_bytes": 10485760, "default_config": false, "limits": [], "default_config_exists": false } ] 13.6.3. Modifying the quota To change the existing quota, in this instance from 10 MB to 100 MB, PUT data to the /api/v1/organization/{orgname}/quota/{quota_id} endpoint: Sample command USD curl -k -X PUT -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq Sample output { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [], "default_config_exists": false } 13.6.4. Pushing images To see the storage consumed, push various images to the organization. 13.6.4.1. Pushing ubuntu:18.04 Push ubuntu:18.04 to the organization from the command line: Sample commands USD podman pull ubuntu:18.04 USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 13.6.4.2. Using the API to view quota usage To view the storage consumed, GET data from the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' | jq Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false } ] } 13.6.4.3. 
Pushing another image Pull, tag, and push a second image, for example, nginx : Sample commands USD podman pull nginx USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] } To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq Sample output { "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } } 13.6.5. Rejecting pushes using quota limits If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, or warning , users are notified. For a hard check, or reject , the push is terminated. 13.6.5.1. Setting reject and warning limits To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint: Sample reject limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit Sample warning limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit 13.6.5.2. Viewing reject and warning limits To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint: View quota limits USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output for quota limits [ { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [ { "id": 2, "type": "Warning", "limit_percent": 50 }, { "id": 1, "type": "Reject", "limit_percent": 80 } ], "default_config_exists": false } ] 13.6.5.3. Pushing an image when the reject limit is exceeded In this example, the reject limit (80%) has been set to below the current repository size (~83%), so the push should automatically be rejected. 
Push a sample image to the organization from the command line: Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace 13.6.5.4. Notifications for limits exceeded When limits are exceeded, a notification appears: Quota notifications 13.7. Calculating the total registry size in Red Hat Quay 3.9 Use the following procedure to queue a registry total calculation. Note This feature is done on-demand, and calculating a registry total is database intensive. Use with caution. Prerequisites You have upgraded to Red Hat Quay 3.9. You are logged in as a Red Hat Quay superuser. Procedure On the Red Hat Quay UI, click your username Super User Admin Panel . In the navigation pane, click Manage Organizations . Click Calculate , to Total Registry Size: 0.00 KB, Updated: Never , Calculation required . Then, click Ok . After a few minutes, depending on the size of your registry, refresh the page. Now, the Total Registry Size should be calculated. For example: 13.8. Permanently deleting an image tag In some cases, users might want to delete an image tag outside of the time machine window. Use the following procedure to manually delete an image tag permanently. 
Important The results of the following procedure cannot be undone. Use with caution. 13.8.1. Permanently deleting an image tag using the Red Hat Quay v2 UI Use the following procedure to permanently delete an image tag using the Red Hat Quay v2 UI. Prerequisites You have set FEATURE_UI_V2 to true in your config.yaml file. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true In the navigation pane, click Repositories . Click the name of the repository, for example, quayadmin/busybox . Check the box of the image tag that will be deleted, for example, test . Click Actions Permanently Delete . Important This action is permanent and cannot be undone. 13.8.2. Permanently deleting an image tag using the Red Hat Quay legacy UI Use the following procedure to permanently delete an image tag using the Red Hat Quay legacy UI. Procedure Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example: PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true On the Red Hat Quay UI, click Repositories and the name of the repository that contains the image tag you will delete, for example, quayadmin/busybox . In the navigation pane, click Tags . Check the box of the name of the tag you want to delete, for example, test . Click the Actions drop down menu and select Delete Tags Delete Tag . Click Tag History in the navigation pane. On the name of the tag that was just deleted, for example, test , click Delete test under the Permanently Delete category. For example: Permanently delete image tag Important This action is permanent and cannot be undone. | [
"FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 1 RESET_CHILD_MANIFEST_EXPIRATION: true",
"FEATURE_GARBAGE_COLLECTION: true FEATURE_QUOTA_MANAGEMENT: true QUOTA_BACKFILL: false QUOTA_TOTAL_DELAY_SECONDS: 0 PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true",
"QUOTA_BACKFILL: true",
"podman pull ubuntu:18.04",
"podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1",
"podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2",
"podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1",
"podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2",
"podman pull ubuntu:18.04",
"podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"podman pull nginx",
"podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[]",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq",
"\"Created\"",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 10485760, \"default_config\": false, \"limits\": [], \"default_config_exists\": false } ]",
"curl -k -X PUT -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"limit_bytes\": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq",
"{ \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [], \"default_config_exists\": false }",
"podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' | jq",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }",
"podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true'",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq",
"{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true",
"PERMANENTLY_DELETE_TAGS: true RESET_CHILD_MANIFEST_EXPIRATION: true"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/red-hat-quay-quota-management-and-enforcement |
Virtualization | Virtualization OpenShift Container Platform 4.10 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/index |
Chapter 6. Kafka Bridge | Chapter 6. Kafka Bridge This chapter provides an overview of the AMQ Streams Kafka Bridge and helps you get started using its REST API to interact with AMQ Streams. To try out the Kafka Bridge in your local environment, see the Section 6.2, "Kafka Bridge quickstart" later in this chapter. For detailed configuration steps, see Section 2.5, "Kafka Bridge cluster configuration" . To view the API documentation, see the Kafka Bridge API reference . 6.1. Kafka Bridge overview You can use the AMQ Streams Kafka Bridge as an interface to make specific types of HTTP requests to the Kafka cluster. 6.1.1. Kafka Bridge interface The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to AMQ Streams, without the need for client applications to interpret the Kafka protocol. The API has two main resources - consumers and topics - that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 6.1.1.1. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic. Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Clients can produce and consume messages without the requirement to use the native Kafka protocol. Additional resources To view the API documentation, including example requests and responses, see the Kafka Bridge API reference . 6.1.2. Supported clients for the Kafka Bridge You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster. Internal clients Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself. Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running. External clients can access the Kafka Bridge through an OpenShift Route, a loadbalancer service, or using an Ingress. HTTP internal and external client integration 6.1.3. Securing the Kafka Bridge AMQ Streams does not currently provide any encryption, authentication, or authorization for the Kafka Bridge. This means that requests sent from external clients to the Kafka Bridge are: Not encrypted, and must use HTTP rather than HTTPS Sent without authentication However, you can secure the Kafka Bridge using other methods, such as: OpenShift Network Policies that define which pods can access the Kafka Bridge. Reverse proxies with authentication or authorization, for example, OAuth2 proxies. API Gateways. Ingress or OpenShift Routes with TLS termination. 
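For the network policy option listed above, a minimal sketch of such a policy is shown below. It is not part of the AMQ Streams deployment: the pod selector labels are the ones the Kafka Bridge service uses (shown later in this chapter), while the app: kafka-bridge-clients label on the permitted client pods is an assumed example that you would replace with your own.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kafka-bridge-allow-http-clients
spec:
  podSelector:
    matchLabels:
      strimzi.io/kind: KafkaBridge
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: kafka-bridge-clients
      ports:
        - protocol: TCP
          port: 8080

With a policy like this applied in the Kafka Bridge namespace, only pods carrying the permitted label can reach the bridge on its HTTP port; other in-cluster traffic to the bridge pods is dropped.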
The Kafka Bridge supports TLS encryption and TLS and SASL authentication when connecting to the Kafka Brokers. Within your OpenShift cluster, you can configure: TLS or SASL-based authentication between the Kafka Bridge and your Kafka cluster A TLS-encrypted connection between the Kafka Bridge and your Kafka cluster. For more information, see Section 2.5.1, "Configuring the Kafka Bridge" . You can use ACLs in Kafka brokers to restrict the topics that can be consumed and produced using the Kafka Bridge. 6.1.4. Accessing the Kafka Bridge outside of OpenShift After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the kafka-bridge-name -bridge-service Service to access the API. If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by using one of the following features: Services of types LoadBalancer or NodePort Ingress resources OpenShift Routes If you decide to create Services, use the following labels in the selector to configure the pods to which the service will route the traffic: # ... selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #... 1 Name of the Kafka Bridge custom resource in your OpenShift cluster. 6.1.5. Requests to the Kafka Bridge Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge. 6.1.5.1. Content Type headers API request and response bodies are always encoded as JSON. When performing consumer operations, POST requests must provide the following Content-Type header if there is a non-empty body: Content-Type: application/vnd.kafka.v2+json When performing producer operations, POST requests must provide Content-Type headers specifying the embedded data format of the messages produced. This can be either json or binary . Embedded data format Content-Type header JSON Content-Type: application/vnd.kafka.json.v2+json Binary Content-Type: application/vnd.kafka.binary.v2+json The embedded data format is set per consumer, as described in the section. The Content-Type must not be set if the POST request has an empty body. An empty body can be used to create a consumer with the default values. 6.1.5.2. Embedded data format The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Two embedded data formats are supported: JSON and binary. When creating a consumer using the /consumers/ groupid endpoint, the POST request body must specify an embedded data format of either JSON or binary. This is specified in the format field, for example: { "name": "my-consumer", "format": "binary", 1 ... } 1 A binary embedded data format. The embedded data format specified when creating a consumer must match the data format of the Kafka messages it will consume. If you choose to specify a binary embedded data format, subsequent producer requests must provide the binary data in the request body as Base64-encoded strings. For example, when sending messages using the /topics/ topicname endpoint, records.value must be encoded in Base64: { "records": [ { "key": "my-key", "value": "ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ=" }, ] } Producer requests must also provide a Content-Type header that corresponds to the embedded data format, for example, Content-Type: application/vnd.kafka.binary.v2+json . 6.1.5.3. 
Message format When sending messages using the /topics endpoint, you enter the message payload in the request body, in the records parameter. The records parameter can contain any of these optional fields: Message headers Message key Message value Destination partition Example POST request to /topics curl -X POST \ http://localhost:8080/topics/my-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" "partition": 2 "headers": [ { "key": "key1", "value": "QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==" 1 } ] }, ] }' 1 The header value in binary format and encoded as Base64. 6.1.5.4. Accept headers After creating a consumer, all subsequent GET requests must provide an Accept header in the following format: Accept: application/vnd.kafka. EMBEDDED-DATA-FORMAT .v2+json The EMBEDDED-DATA-FORMAT is either json or binary . For example, when retrieving records for a subscribed consumer using an embedded data format of JSON, include this Accept header: Accept: application/vnd.kafka.json.v2+json 6.1.6. CORS Cross-Origin Resource Sharing (CORS) allows you to specify allowed methods and originating URLs for accessing the Kafka cluster in your Kafka Bridge HTTP configuration . Example CORS configuration for Kafka Bridge # ... cors: allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" # ... CORS allows for simple and preflighted requests between origin sources on different domains. Simple requests are suitable for standard requests using GET , HEAD , POST methods. A preflighted request sends a HTTP OPTIONS request as an initial check that the actual request is safe to send. On confirmation, the actual request is sent. Preflight requests are suitable for methods that require greater safeguards, such as PUT and DELETE , and use non-standard headers. All requests require an Origin value in their header, which is the source of the HTTP request. 6.1.6.1. Simple request For example, this simple request header specifies the origin as https://strimzi.io . Origin: https://strimzi.io The header information is added to the request. curl -v -X GET HTTP-ADDRESS /bridge-consumer/records \ -H 'Origin: https://strimzi.io'\ -H 'content-type: application/vnd.kafka.v2+json' In the response from the Kafka Bridge, an Access-Control-Allow-Origin header is returned. HTTP/1.1 200 OK Access-Control-Allow-Origin: * 1 1 Returning an asterisk ( * ) shows the resource can be accessed by any domain. 6.1.6.2. Preflighted request An initial preflight request is sent to Kafka Bridge using an OPTIONS method. The HTTP OPTIONS request sends header information to check that Kafka Bridge will allow the actual request. Here the preflight request checks that a POST request is valid from https://strimzi.io . OPTIONS /my-group/instances/my-user/subscription HTTP/1.1 Origin: https://strimzi.io Access-Control-Request-Method: POST 1 Access-Control-Request-Headers: Content-Type 2 1 Kafka Bridge is alerted that the actual request is a POST request. 2 The actual request will be sent with a Content-Type header. OPTIONS is added to the header information of the preflight request. curl -v -X OPTIONS -H 'Origin: https://strimzi.io' \ -H 'Access-Control-Request-Method: POST' \ -H 'content-type: application/vnd.kafka.v2+json' Kafka Bridge responds to the initial request to confirm that the request will be accepted. The response header returns allowed origins, methods and headers. 
HTTP/1.1 200 OK Access-Control-Allow-Origin: https://strimzi.io Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS,PATCH Access-Control-Allow-Headers: content-type If the origin or method is rejected, an error message is returned. The actual request does not require Access-Control-Request-Method header, as it was confirmed in the preflight request, but it does require the origin header. curl -v -X POST HTTP-ADDRESS /topics/bridge-topic \ -H 'Origin: https://strimzi.io' \ -H 'content-type: application/vnd.kafka.v2+json' The response shows the originating URL is allowed. HTTP/1.1 200 OK Access-Control-Allow-Origin: https://strimzi.io Additional resources Fetch CORS specification 6.1.7. Kafka Bridge API resources For the full list of REST API endpoints and descriptions, including example requests and responses, see the Kafka Bridge API reference . 6.1.8. Kafka Bridge deployment You deploy the Kafka Bridge into your OpenShift cluster by using the Cluster Operator. After the Kafka Bridge is deployed, the Cluster Operator creates Kafka Bridge objects in your OpenShift cluster. Objects include the deployment , service , and pod , each named after the name given in the custom resource for the Kafka Bridge. Additional resources For deployment instructions, see Deploying Kafka Bridge to your OpenShift cluster in the Deploying and Upgrading AMQ Streams on OpenShift guide. For detailed information on configuring the Kafka Bridge, see Section 2.5, "Kafka Bridge cluster configuration" For information on configuring the host and port for the KafkaBridge resource, see Section 2.5.1, "Configuring the Kafka Bridge" . For information on integrating external clients, see Section 6.1.4, "Accessing the Kafka Bridge outside of OpenShift" . 6.2. Kafka Bridge quickstart Use this quickstart to try out the AMQ Streams Kafka Bridge in your local development environment. You will learn how to: Deploy the Kafka Bridge to your OpenShift cluster Expose the Kafka Bridge service to your local machine by using port-forwarding Produce messages to topics and partitions in your Kafka cluster Create a Kafka Bridge consumer Perform basic consumer operations, such as subscribing the consumer to topics and retrieving the messages that you produced In this quickstart, HTTP requests are formatted as curl commands that you can copy and paste to your terminal. Access to an OpenShift cluster is required. Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter. About data formats In this quickstart, you will produce and consume messages in JSON format, not binary. For more information on the data formats and HTTP headers used in the example requests, see Section 6.1.5, "Requests to the Kafka Bridge" . Prerequisites for the quickstart Cluster administrator access to a local or remote OpenShift cluster. AMQ Streams is installed. A running Kafka cluster, deployed by the Cluster Operator, in an OpenShift namespace. The Entity Operator is deployed and running as part of the Kafka cluster. 6.2.1. Deploying the Kafka Bridge to your OpenShift cluster AMQ Streams includes a YAML example that specifies the configuration of the AMQ Streams Kafka Bridge. Make some minimal changes to this file and then deploy an instance of the Kafka Bridge to your OpenShift cluster. Procedure Edit the examples/bridge/kafka-bridge.yaml file. 
apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: quickstart 1 spec: replicas: 1 bootstrapServers: <cluster-name>-kafka-bootstrap:9092 2 http: port: 8080 1 When the Kafka Bridge is deployed, -bridge is appended to the name of the deployment and other related resources. In this example, the Kafka Bridge deployment is named quickstart-bridge and the accompanying Kafka Bridge service is named quickstart-bridge-service . 2 In the bootstrapServers property, enter the name of the Kafka cluster as the <cluster-name> . Deploy the Kafka Bridge to your OpenShift cluster: oc apply -f examples/bridge/kafka-bridge.yaml A quickstart-bridge deployment, service, and other related resources are created in your OpenShift cluster. Verify that the Kafka Bridge was successfully deployed: oc get deployments NAME READY UP-TO-DATE AVAILABLE AGE quickstart-bridge 1/1 1 1 34m my-cluster-connect 1/1 1 1 24h my-cluster-entity-operator 1/1 1 1 24h #... What to do After deploying the Kafka Bridge to your OpenShift cluster, expose the Kafka Bridge service to your local machine . Additional resources For more detailed information about configuring the Kafka Bridge, see Section 2.5, "Kafka Bridge cluster configuration" . 6.2.2. Exposing the Kafka Bridge service to your local machine , use port forwarding to expose the AMQ Streams Kafka Bridge service to your local machine on http://localhost:8080 . Note Port forwarding is only suitable for development and testing purposes. Procedure List the names of the pods in your OpenShift cluster: oc get pods -o name pod/kafka-consumer # ... pod/quickstart-bridge-589d78784d-9jcnr pod/strimzi-cluster-operator-76bcf9bc76-8dnfm Connect to the quickstart-bridge pod on port 8080 : oc port-forward pod/quickstart-bridge-589d78784d-9jcnr 8080:8080 & Note If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008 . API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod. 6.2.3. Producing messages to topics and partitions , produce messages to topics in JSON format by using the topics endpoint. You can specify destination partitions for messages in the request body, as shown here. The partitions endpoint provides an alternative method for specifying a single destination partition for all messages as a path parameter. Procedure In a text editor, create a YAML definition for a Kafka topic with three partitions. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: bridge-quickstart-topic labels: strimzi.io/cluster: <kafka-cluster-name> 1 spec: partitions: 3 2 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 1 The name of the Kafka cluster in which the Kafka Bridge is deployed. 2 The number of partitions for the topic. Save the file to the examples/topic directory as bridge-quickstart-topic.yaml . Create the topic in your OpenShift cluster: oc apply -f examples/topic/bridge-quickstart-topic.yaml Using the Kafka Bridge, produce three messages to the topic you created: curl -X POST \ http://localhost:8080/topics/bridge-quickstart-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" }, { "value": "sales-lead-0002", "partition": 2 }, { "value": "sales-lead-0003" } ] }' sales-lead-0001 is sent to a partition based on the hash of the key. sales-lead-0002 is sent directly to partition 2. 
sales-lead-0003 is sent to a partition in the bridge-quickstart-topic topic using a round-robin method. If the request is successful, the Kafka Bridge returns an offsets array, along with a 200 code and a content-type header of application/vnd.kafka.v2+json . For each message, the offsets array describes: The partition that the message was sent to The current message offset of the partition Example response #... { "offsets":[ { "partition":0, "offset":0 }, { "partition":2, "offset":0 }, { "partition":0, "offset":1 } ] } What to do After producing messages to topics and partitions, create a Kafka Bridge consumer . Additional resources POST /topics/{topicname} in the API reference documentation. POST /topics/{topicname}/partitions/{partitionid} in the API reference documentation. 6.2.4. Creating a Kafka Bridge consumer Before you can perform any consumer operations in the Kafka cluster, you must first create a consumer by using the consumers endpoint. The consumer is referred to as a Kafka Bridge consumer . Procedure Create a Kafka Bridge consumer in a new consumer group named bridge-quickstart-consumer-group : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "name": "bridge-quickstart-consumer", "auto.offset.reset": "earliest", "format": "json", "enable.auto.commit": false, "fetch.min.bytes": 512, "consumer.request.timeout.ms": 30000 }' The consumer is named bridge-quickstart-consumer and the embedded data format is set as json . Some basic configuration settings are defined. The consumer will not commit offsets to the log automatically because the enable.auto.commit setting is false . You will commit the offsets manually later in this quickstart. If the request is successful, the Kafka Bridge returns the consumer ID ( instance_id ) and base URL ( base_uri ) in the response body, along with a 200 code. Example response #... { "instance_id": "bridge-quickstart-consumer", "base_uri":"http://<bridge-name>-bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer" } Copy the base URL ( base_uri ) to use in the other consumer operations in this quickstart. What to do Now that you have created a Kafka Bridge consumer, you can subscribe it to topics . Additional resources POST /consumers/{groupid} in the API reference documentation. 6.2.5. Subscribing a Kafka Bridge consumer to topics After you have created a Kafka Bridge consumer, subscribe it to one or more topics by using the subscription endpoint. Once subscribed, the consumer starts receiving all messages that are produced to the topic. Procedure Subscribe the consumer to the bridge-quickstart-topic topic that you created earlier, in Producing messages to topics and partitions : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "topics": [ "bridge-quickstart-topic" ] }' The topics array can contain a single topic (as shown here) or multiple topics. If you want to subscribe the consumer to multiple topics that match a regular expression, you can use the topic_pattern string instead of the topics array. If the request is successful, the Kafka Bridge returns a 204 (No Content) code only. What to do After subscribing a Kafka Bridge consumer to topics, you can retrieve messages from the consumer . 
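Taken together, the consumer creation and subscription calls above are often scripted as a single step. The following is a minimal bash sketch, not part of the original quickstart, that assumes the quickstart names (bridge-quickstart-consumer-group, bridge-quickstart-consumer, bridge-quickstart-topic) and a bridge reachable on localhost:8080 through the port forwarding set up earlier:

#!/usr/bin/env bash
# Create the quickstart consumer and subscribe it to the quickstart topic.
set -euo pipefail
BRIDGE=http://localhost:8080
GROUP=bridge-quickstart-consumer-group
CONSUMER=bridge-quickstart-consumer
TOPIC=bridge-quickstart-topic
# Create the consumer; the response body contains the base_uri used by later calls.
curl -s -X POST "$BRIDGE/consumers/$GROUP" \
  -H 'content-type: application/vnd.kafka.v2+json' \
  -d "{\"name\": \"$CONSUMER\", \"format\": \"json\", \"auto.offset.reset\": \"earliest\", \"enable.auto.commit\": false}"
# Subscribe the consumer to the topic; a successful call returns 204 (No Content).
curl -s -X POST "$BRIDGE/consumers/$GROUP/instances/$CONSUMER/subscription" \
  -H 'content-type: application/vnd.kafka.v2+json' \
  -d "{\"topics\": [\"$TOPIC\"]}"

Creating the consumer with enable.auto.commit set to false matches the quickstart configuration, so offsets still have to be committed manually in a later step.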
Additional resources POST /consumers/{groupid}/instances/{name}/subscription in the API reference documentation. 6.2.6. Retrieving the latest messages from a Kafka Bridge consumer Next, retrieve the latest messages from the Kafka Bridge consumer by requesting data from the records endpoint. In production, HTTP clients can call this endpoint repeatedly (in a loop). Procedure Produce additional messages to the Kafka Bridge consumer, as described in Producing messages to topics and partitions . Submit a GET request to the records endpoint: curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \ -H 'accept: application/vnd.kafka.json.v2+json' After creating and subscribing to a Kafka Bridge consumer, a first GET request will return an empty response because the poll operation starts a rebalancing process to assign partitions. Repeat step two to retrieve messages from the Kafka Bridge consumer. The Kafka Bridge returns an array of messages - describing the topic name, key, value, partition, and offset - in the response body, along with a 200 code. Messages are retrieved from the latest offset by default. HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json #... [ { "topic":"bridge-quickstart-topic", "key":"my-key", "value":"sales-lead-0001", "partition":0, "offset":0 }, { "topic":"bridge-quickstart-topic", "key":null, "value":"sales-lead-0003", "partition":0, "offset":1 }, #... Note If an empty response is returned, produce more records to the consumer as described in Producing messages to topics and partitions , and then try retrieving messages again. What to do After retrieving messages from a Kafka Bridge consumer, try committing offsets to the log . Additional resources GET /consumers/{groupid}/instances/{name}/records in the API reference documentation. 6.2.7. Committing offsets to the log Next, use the offsets endpoint to manually commit offsets to the log for all messages received by the Kafka Bridge consumer. This is required because the Kafka Bridge consumer that you created earlier, in Creating a Kafka Bridge consumer , was configured with the enable.auto.commit setting as false . Procedure Commit offsets to the log for the bridge-quickstart-consumer : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets Because no request body is submitted, offsets are committed for all the records that have been received by the consumer. Alternatively, the request body can contain an array ( OffsetCommitSeekList ) that specifies the topics and partitions that you want to commit offsets for. If the request is successful, the Kafka Bridge returns a 204 code only. What to do After committing offsets to the log, try out the endpoints for seeking to offsets . Additional resources POST /consumers/{groupid}/instances/{name}/offsets in the API reference documentation. 6.2.8. Seeking to offsets for a partition Next, use the positions endpoints to configure the Kafka Bridge consumer to retrieve messages for a partition from a specific offset, and then from the latest offset. This is referred to in Apache Kafka as a seek operation.
Procedure Seek to a specific offset for partition 0 of the bridge-quickstart-topic topic: curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "offsets": [ { "topic": "bridge-quickstart-topic", "partition": 0, "offset": 2 } ] }' If the request is successful, the Kafka Bridge returns a 204 code only. Submit a GET request to the records endpoint: curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \ -H 'accept: application/vnd.kafka.json.v2+json' The Kafka Bridge returns messages from the offset that you seeked to. Restore the default message retrieval behavior by seeking to the last offset for the same partition. This time, use the positions/end endpoint. curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "partitions": [ { "topic": "bridge-quickstart-topic", "partition": 0 } ] }' If the request is successful, the Kafka Bridge returns another 204 code. Note You can also use the positions/beginning endpoint to seek to the first offset for one or more partitions. What to do In this quickstart, you have used the AMQ Streams Kafka Bridge to perform several common operations on a Kafka cluster. You can now delete the Kafka Bridge consumer that you created earlier. Additional resources POST /consumers/{groupid}/instances/{name}/positions in the API reference documentation. POST /consumers/{groupid}/instances/{name}/positions/beginning in the API reference documentation. POST /consumers/{groupid}/instances/{name}/positions/end in the API reference documentation. 6.2.9. Deleting a Kafka Bridge consumer Finally, delete the Kafka Bridge consumer that you used throughout this quickstart. Procedure Delete the Kafka Bridge consumer by sending a DELETE request to the instances endpoint. curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer If the request is successful, the Kafka Bridge returns a 204 code only. Additional resources DELETE /consumers/{groupid}/instances/{name} in the API reference documentation. | [
"selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #",
"Content-Type: application/vnd.kafka.v2+json",
"{ \"name\": \"my-consumer\", \"format\": \"binary\", 1 }",
"{ \"records\": [ { \"key\": \"my-key\", \"value\": \"ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ=\" }, ] }",
"curl -X POST http://localhost:8080/topics/my-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" \"partition\": 2 \"headers\": [ { \"key\": \"key1\", \"value\": \"QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==\" 1 } ] }, ] }'",
"Accept: application/vnd.kafka. EMBEDDED-DATA-FORMAT .v2+json",
"Accept: application/vnd.kafka.json.v2+json",
"cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" #",
"Origin: https://strimzi.io",
"curl -v -X GET HTTP-ADDRESS /bridge-consumer/records -H 'Origin: https://strimzi.io' -H 'content-type: application/vnd.kafka.v2+json'",
"HTTP/1.1 200 OK Access-Control-Allow-Origin: * 1",
"OPTIONS /my-group/instances/my-user/subscription HTTP/1.1 Origin: https://strimzi.io Access-Control-Request-Method: POST 1 Access-Control-Request-Headers: Content-Type 2",
"curl -v -X OPTIONS -H 'Origin: https://strimzi.io' -H 'Access-Control-Request-Method: POST' -H 'content-type: application/vnd.kafka.v2+json'",
"HTTP/1.1 200 OK Access-Control-Allow-Origin: https://strimzi.io Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS,PATCH Access-Control-Allow-Headers: content-type",
"curl -v -X POST HTTP-ADDRESS /topics/bridge-topic -H 'Origin: https://strimzi.io' -H 'content-type: application/vnd.kafka.v2+json'",
"HTTP/1.1 200 OK Access-Control-Allow-Origin: https://strimzi.io",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: quickstart 1 spec: replicas: 1 bootstrapServers: <cluster-name>-kafka-bootstrap:9092 2 http: port: 8080",
"apply -f examples/bridge/kafka-bridge.yaml",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE quickstart-bridge 1/1 1 1 34m my-cluster-connect 1/1 1 1 24h my-cluster-entity-operator 1/1 1 1 24h #",
"get pods -o name pod/kafka-consumer pod/quickstart-bridge-589d78784d-9jcnr pod/strimzi-cluster-operator-76bcf9bc76-8dnfm",
"port-forward pod/quickstart-bridge-589d78784d-9jcnr 8080:8080 &",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: bridge-quickstart-topic labels: strimzi.io/cluster: <kafka-cluster-name> 1 spec: partitions: 3 2 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824",
"apply -f examples/topic/bridge-quickstart-topic.yaml",
"curl -X POST http://localhost:8080/topics/bridge-quickstart-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" }, { \"value\": \"sales-lead-0002\", \"partition\": 2 }, { \"value\": \"sales-lead-0003\" } ] }'",
"# { \"offsets\":[ { \"partition\":0, \"offset\":0 }, { \"partition\":2, \"offset\":0 }, { \"partition\":0, \"offset\":1 } ] }",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"name\": \"bridge-quickstart-consumer\", \"auto.offset.reset\": \"earliest\", \"format\": \"json\", \"enable.auto.commit\": false, \"fetch.min.bytes\": 512, \"consumer.request.timeout.ms\": 30000 }'",
"# { \"instance_id\": \"bridge-quickstart-consumer\", \"base_uri\":\"http://<bridge-name>-bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer\" }",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"topics\": [ \"bridge-quickstart-topic\" ] }'",
"curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'",
"HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json # [ { \"topic\":\"bridge-quickstart-topic\", \"key\":\"my-key\", \"value\":\"sales-lead-0001\", \"partition\":0, \"offset\":0 }, { \"topic\":\"bridge-quickstart-topic\", \"key\":null, \"value\":\"sales-lead-0003\", \"partition\":0, \"offset\":1 }, #",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"offsets\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0, \"offset\": 2 } ] }'",
"curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"partitions\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0 } ] }'",
"curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/kafka-bridge-concepts-str |
Chapter 1. Preparing the installation | Chapter 1. Preparing the installation To prepare a OpenShift Dev Spaces installation, learn about the OpenShift Dev Spaces ecosystem and deployment constraints: Section 1.1, "Supported platforms" Section 1.2, "Installing the dsc management tool" Section 1.3, "Architecture" Section 1.4, "Calculating Dev Spaces resource requirements" Section 3.1, "Understanding the CheCluster Custom Resource" 1.1. Supported platforms OpenShift Dev Spaces runs on OpenShift 4.12-4.16 on the following CPU architectures: AMD64 and Intel 64 ( x86_64 ) IBM Z ( s390x ) The following CPU architecture requires Openshift 4.13-4.16 to run OpenShift Dev Spaces: IBM Power ( ppc64le ) Additional resources OpenShift Documentation 1.2. Installing the dsc management tool You can install dsc , the Red Hat OpenShift Dev Spaces command-line management tool, on Microsoft Windows, Apple MacOS, and Linux. With dsc , you can perform operations the OpenShift Dev Spaces server such as starting, stopping, updating, and deleting the server. Prerequisites Linux or macOS. Note For installing dsc on Windows, see the following pages: https://developers.redhat.com/products/openshift-dev-spaces/download https://github.com/redhat-developer/devspaces-chectl Procedure Download the archive from https://developers.redhat.com/products/openshift-dev-spaces/download to a directory such as USDHOME . Run tar xvzf on the archive to extract the /dsc directory. Add the extracted /dsc/bin subdirectory to USDPATH . Verification Run dsc to view information about it. Additional resources " dsc reference documentation " 1.3. Architecture Figure 1.1. High-level OpenShift Dev Spaces architecture with the Dev Workspace operator OpenShift Dev Spaces runs on three groups of components: OpenShift Dev Spaces server components Manage User project and workspaces. The main component is the User dashboard, from which users control their workspaces. Dev Workspace operator Creates and controls the necessary OpenShift objects to run User workspaces. Including Pods , Services , and PersistentVolumes . User workspaces Container-based development environments, the IDE included. The role of these OpenShift features is central: Dev Workspace Custom Resources Valid OpenShift objects representing the User workspaces and manipulated by OpenShift Dev Spaces. It is the communication channel for the three groups of components. OpenShift role-based access control (RBAC) Controls access to all resources. Additional resources Section 1.3.1, "Server components" Section 1.3.1.2, "Dev Workspace operator" Section 1.3.2, "User workspaces" Dev Workspace Operator repository Kubernetes documentation - Custom Resources 1.3.1. Server components The OpenShift Dev Spaces server components ensure multi-tenancy and workspaces management. Figure 1.2. OpenShift Dev Spaces server components interacting with the Dev Workspace operator Additional resources Section 1.3.1.1, "Dev Spaces operator" Section 1.3.1.3, "Gateway" Section 1.3.1.4, "User dashboard" Section 1.3.1.5, "Dev Spaces server" Section 1.3.1.6, "Plug-in registry" 1.3.1.1. Dev Spaces operator The OpenShift Dev Spaces operator ensure full lifecycle management of the OpenShift Dev Spaces server components. It introduces: CheCluster custom resource definition (CRD) Defines the CheCluster OpenShift object. OpenShift Dev Spaces controller Creates and controls the necessary OpenShift objects to run a OpenShift Dev Spaces instance, such as pods, services, and persistent volumes. 
CheCluster custom resource (CR) On a cluster with the OpenShift Dev Spaces operator, it is possible to create a CheCluster custom resource (CR). The OpenShift Dev Spaces operator ensures the full lifecycle management of the OpenShift Dev Spaces server components on this OpenShift Dev Spaces instance: Section 1.3.1.2, "Dev Workspace operator" Section 1.3.1.3, "Gateway" Section 1.3.1.4, "User dashboard" Section 1.3.1.5, "Dev Spaces server" Section 1.3.1.6, "Plug-in registry" Additional resources Section 3.1, "Understanding the CheCluster Custom Resource" Chapter 2, Installing Dev Spaces 1.3.1.2. Dev Workspace operator The Dev Workspace operator extends OpenShift to provide Dev Workspace support. It introduces: Dev Workspace custom resource definition Defines the Dev Workspace OpenShift object from the Devfile v2 specification. Dev Workspace controller Creates and controls the necessary OpenShift objects to run a Dev Workspace, such as pods, services, and persistent volumes. Dev Workspace custom resource On a cluster with the Dev Workspace operator, it is possible to create Dev Workspace custom resources (CR). A Dev Workspace CR is a OpenShift representation of a Devfile. It defines a User workspaces in a OpenShift cluster. Additional resources Devfile API repository 1.3.1.3. Gateway The OpenShift Dev Spaces gateway has following roles: Routing requests. It uses Traefik . Authenticating users with OpenID Connect (OIDC). It uses OpenShift OAuth2 proxy . Applying OpenShift Role based access control (RBAC) policies to control access to any OpenShift Dev Spaces resource. It uses `kube-rbac-proxy` . The OpenShift Dev Spaces operator manages it as the che-gateway Deployment. It controls access to: Section 1.3.1.4, "User dashboard" Section 1.3.1.5, "Dev Spaces server" Section 1.3.1.6, "Plug-in registry" Section 1.3.2, "User workspaces" Figure 1.3. OpenShift Dev Spaces gateway interactions with other components Additional resources Section 3.11, "Managing identities and authorizations" 1.3.1.4. User dashboard The user dashboard is the landing page of Red Hat OpenShift Dev Spaces. OpenShift Dev Spaces users browse the user dashboard to access and manage their workspaces. It is a React application. The OpenShift Dev Spaces deployment starts it in the devspaces-dashboard Deployment. It needs access to: Section 1.3.1.5, "Dev Spaces server" Section 1.3.1.6, "Plug-in registry" OpenShift API Figure 1.4. User dashboard interactions with other components When the user requests the user dashboard to start a workspace, the user dashboard executes this sequence of actions: Sends the repository URL to Section 1.3.1.5, "Dev Spaces server" and expects a devfile in return, when the user is creating a workspace from a remote devfile. Reads the devfile describing the workspace. Collects the additional metadata from the Section 1.3.1.6, "Plug-in registry" . Converts the information into a Dev Workspace Custom Resource. Creates the Dev Workspace Custom Resource in the user project using the OpenShift API. Watches the Dev Workspace Custom Resource status. Redirects the user to the running workspace IDE. 1.3.1.5. Dev Spaces server Additional resources The OpenShift Dev Spaces server main functions are: Creating user namespaces. Provisioning user namespaces with required secrets and config maps. Integrating with Git services providers, to fetch and validate devfiles and authentication. 
The OpenShift Dev Spaces server is a Java web service exposing an HTTP REST API and needs access to: Git service providers OpenShift API Figure 1.5. OpenShift Dev Spaces server interactions with other components Additional resources Section 3.3.2, "Advanced configuration options for Dev Spaces server" 1.3.1.6. Plug-in registry Each OpenShift Dev Spaces workspace starts with a specific editor and set of associated extensions. The OpenShift Dev Spaces plugin registry provides the list of available editors and editor extensions. A Devfile v2 describes each editor or extension. The Section 1.3.1.4, "User dashboard" is reading the content of the registry. Figure 1.6. Plugin registries interactions with other components Additional resources Editor definitions in the OpenShift Dev Spaces plugin registry repository Plugin registry latest community version online instance 1.3.2. User workspaces Figure 1.7. User workspaces interactions with other components User workspaces are web IDEs running in containers. A User workspace is a web application. It consists of microservices running in containers providing all the services of a modern IDE running in your browser: Editor Language auto-completion Language server Debugging tools Plug-ins Application runtimes A workspace is one OpenShift Deployment containing the workspace containers and enabled plugins, plus related OpenShift components: Containers ConfigMaps Services Endpoints Ingresses or Routes Secrets Persistent Volumes (PV) A OpenShift Dev Spaces workspace contains the source code of the projects, persisted in a OpenShift Persistent Volume (PV). Microservices have read/write access to this shared directory. Use the devfile v2 format to specify the tools and runtime applications of a OpenShift Dev Spaces workspace. The following diagram shows one running OpenShift Dev Spaces workspace and its components. Figure 1.8. OpenShift Dev Spaces workspace components In the diagram, there is one running workspaces. 1.4. Calculating Dev Spaces resource requirements The OpenShift Dev Spaces Operator, Dev Workspace Controller, and user workspaces consist of a set of pods. The pods contribute to the resource consumption in CPU and memory limits and requests. Note The following link to an example devfile is a pointer to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat's QE department, and they have not yet been proven by a wide user group. Please, use this information cautiously. It is best used for educational and 'developmental' purposes rather than 'production' purposes. Procedure Identify the workspace resource requirements which depend on the devfile that is used for defining the development environment. This includes identifying the workspace components explicitly specified in the components section of the devfile. Here is an example devfile with the following components: Example 1.1. tools The tools component of the devfile defines the following requests and limits: memoryLimit: 6G memoryRequest: 512M cpuRequest: 1000m cpuLimit: 4000m During the workspace startup, an internal che-gateway container is implicitly provisioned with the following requests and limits: memoryLimit: 256M memoryRequest: 64M cpuRequest: 50m cpuLimit: 500m Calculate the sums of the resources required for each workspace. If you intend to use multiple devfiles, repeat this calculation for every expected devfile. Example 1.2. 
Workspace requirements for the example devfile in the step

Purpose                        Pod        Container name   Memory limit   Memory request   CPU limit   CPU request
Developer tools                workspace  tools            6 GiB          512 MiB          4000 m      1000 m
OpenShift Dev Spaces gateway   workspace  che-gateway      256 MiB        64 MiB           500 m       50 m
Total                                                      6.3 GiB        576 MiB          4500 m      1050 m

Multiply the resources calculated per workspace by the number of workspaces that you expect all of your users to run simultaneously. Calculate the sums of the requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller.

Table 1.1. Default requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller

Purpose                            Pod name                          Container names           Memory limit   Memory request   CPU limit   CPU request
OpenShift Dev Spaces operator      devspaces-operator                devspaces-operator        256 MiB        64 MiB           500 m       100 m
OpenShift Dev Spaces Server        devspaces                         devspaces-server          1 GiB          512 MiB          1000 m      100 m
OpenShift Dev Spaces Dashboard     devspaces-dashboard               devspaces-dashboard       256 MiB        32 MiB           500 m       100 m
OpenShift Dev Spaces Gateway       devspaces-gateway                 traefik                   4 GiB          128 MiB          1000 m      100 m
OpenShift Dev Spaces Gateway       devspaces-gateway                 configbump                256 MiB        64 MiB           500 m       50 m
OpenShift Dev Spaces Gateway       devspaces-gateway                 oauth-proxy               512 MiB        64 MiB           500 m       100 m
OpenShift Dev Spaces Gateway       devspaces-gateway                 kube-rbac-proxy           512 MiB        64 MiB           500 m       100 m
Devfile registry                   devfile-registry                  devfile-registry          256 MiB        32 MiB           500 m       100 m
Plugin registry                    plugin-registry                   plugin-registry           256 MiB        32 MiB           500 m       100 m
Dev Workspace Controller Manager   devworkspace-controller-manager   devworkspace-controller   1 GiB          100 MiB          1000 m      250 m
Dev Workspace Controller Manager   devworkspace-controller-manager   kube-rbac-proxy           N/A            N/A              N/A         N/A
Dev Workspace webhook server       devworkspace-webhook-server       webhook-server            300 MiB        20 MiB           200 m       100 m
Dev Workspace Operator Catalog     devworkspace-operator-catalog     registry-server           N/A            50 MiB           N/A         10 m
Dev Workspace Webhook Server       devworkspace-webhook-server       webhook-server            300 MiB        20 MiB           200 m       100 m
Dev Workspace Webhook Server       devworkspace-webhook-server       kube-rbac-proxy           N/A            N/A              N/A         N/A
Total                                                                                          9 GiB          1.2 GiB          6.9         1.3

Additional resources What is a devfile Benefits of devfile Devfile customization overview | [
"dsc",
"memoryLimit: 6G memoryRequest: 512M cpuRequest: 1000m cpuLimit: 4000m",
"memoryLimit: 256M memoryRequest: 64M cpuRequest: 50m cpuLimit: 500m"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/administration_guide/preparing-the-installation |
5.11. Configuring ACPI For Use with Integrated Fence Devices | 5.11. Configuring ACPI For Use with Integrated Fence Devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (see the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. The preferred way to disable ACPI Soft-Off is to change the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay, as described in Section 5.11.1, "Disabling ACPI Soft-Off with the BIOS" . Disabling ACPI Soft-Off with the BIOS may not be possible with some systems. If disabling ACPI Soft-Off with the BIOS is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Setting HandlePowerKey=ignore in the /etc/systemd/logind.conf file and verifying that the node node turns off immediately when fenced, as described in Section 5.11.2, "Disabling ACPI Soft-Off in the logind.conf file" . This is the first alternate method of disabling ACPI Soft-Off. Appending acpi=off to the kernel boot command line, as described in Section 5.11.3, "Disabling ACPI Completely in the GRUB 2 File" . This is the second alternate method of disabling ACPI Soft-Off, if the preferred or the first alternate method is not available. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. 5.11.1. Disabling ACPI Soft-Off with the BIOS You can disable ACPI Soft-Off by configuring the BIOS of each cluster node with the following procedure. Note The procedure for disabling ACPI Soft-Off with the BIOS may differ among server systems. You should verify this procedure with your hardware documentation. Reboot the node and start the BIOS CMOS Setup Utility program. Navigate to the Power menu (or equivalent power management menu). At the Power menu, set the Soft-Off by PWR-BTTN function (or equivalent) to Instant-Off (or the equivalent setting that turns off the node by means of the power button without delay). 
Example 5.1, " BIOS CMOS Setup Utility : Soft-Off by PWR-BTTN set to Instant-Off " shows a Power menu with ACPI Function set to Enabled and Soft-Off by PWR-BTTN set to Instant-Off . Note The equivalents to ACPI Function , Soft-Off by PWR-BTTN , and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off by means of the power button without delay. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration. Verify that the node turns off immediately when fenced. For information on testing a fence device, see Section 5.12, "Testing a Fence Device" . Example 5.1. BIOS CMOS Setup Utility : Soft-Off by PWR-BTTN set to Instant-Off This example shows ACPI Function set to Enabled , and Soft-Off by PWR-BTTN set to Instant-Off . 5.11.2. Disabling ACPI Soft-Off in the logind.conf file To disable power-key handing in the /etc/systemd/logind.conf file, use the following procedure. Define the following configuration in the /etc/systemd/logind.conf file: Reload the systemd configuration: Verify that the node turns off immediately when fenced. For information on testing a fence device, see Section 5.12, "Testing a Fence Device" . 5.11.3. Disabling ACPI Completely in the GRUB 2 File You can disable ACPI Soft-Off by appending acpi=off to the GRUB menu entry for a kernel. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. Use the following procedure to disable ACPI in the GRUB 2 file: Use the --args option in combination with the --update-kernel option of the grubby tool to change the grub.cfg file of each cluster node as follows: For general information on GRUB 2, see the Working with GRUB 2 chapter in the System Administrator's Guide . Reboot the node. Verify that the node turns off immediately when fenced. For information on testing a fence device, see Section 5.12, "Testing a Fence Device" . | [
"+---------------------------------------------|-------------------+ | ACPI Function [Enabled] | Item Help | | ACPI Suspend Type [S1(POS)] |-------------------| | x Run VGABIOS if S3 Resume Auto | Menu Level * | | Suspend Mode [Disabled] | | | HDD Power Down [Disabled] | | | Soft-Off by PWR-BTTN [Instant-Off | | | CPU THRM-Throttling [50.0%] | | | Wake-Up by PCI card [Enabled] | | | Power On by Ring [Enabled] | | | Wake Up On LAN [Enabled] | | | x USB KB Wake-Up From S3 Disabled | | | Resume by Alarm [Disabled] | | | x Date(of Month) Alarm 0 | | | x Time(hh:mm:ss) Alarm 0 : 0 : | | | POWER ON Function [BUTTON ONLY | | | x KB Power ON Password Enter | | | x Hot Key Power ON Ctrl-F1 | | | | | | | | +---------------------------------------------|-------------------+",
"HandlePowerKey=ignore",
"systemctl daemon-reload",
"grubby --args=acpi=off --update-kernel=ALL"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-acpi-CA |
Chapter 5. Deploying Red Hat Quay on public cloud | Chapter 5. Deploying Red Hat Quay on public cloud Red Hat Quay can run on public clouds, either in standalone mode or where OpenShift Container Platform itself has been deployed on public cloud. A full list of tested and supported configurations can be found in the Red Hat Quay Tested Integrations Matrix at https://access.redhat.com/articles/4067991 . Recommendation: If Red Hat Quay is running on public cloud, then you should use the public cloud services for Red Hat Quay backend services to ensure proper high availability and scalability. 5.1. Running Red Hat Quay on Amazon Web Services If Red Hat Quay is running on Amazon Web Services (AWS), you can use the following features: AWS Elastic Load Balancer AWS S3 (hot) blob storage AWS RDS database AWS ElastiCache Redis EC2 virtual machine recommendation: M3.Large or M4.XLarge The following image provides a high level overview of Red Hat Quay running on AWS: Red Hat Quay on AWS 5.2. Running Red Hat Quay on Microsoft Azure If Red Hat Quay is running on Microsoft Azure, you can use the following features: Azure managed services such as highly available PostgreSQL Azure Blob Storage must be hot storage Azure cool storage is not available for Red Hat Quay Azure Cache for Redis The following image provides a high level overview of Red Hat Quay running on Microsoft Azure: Red Hat Quay on Microsoft Azure | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_architecture/arch-deploy-quay-public-cloud |
Chapter 2. Installing and configuring Pipelines as Code | Chapter 2. Installing and configuring Pipelines as Code You can install Pipelines as Code as a part of Red Hat OpenShift Pipelines installation. 2.1. Installing Pipelines as Code on an OpenShift Container Platform Pipelines as Code is installed in the openshift-pipelines namespace when you install the Red Hat OpenShift Pipelines Operator. For more details, see Installing OpenShift Pipelines in the Additional resources section. To disable the default installation of Pipelines as Code with the Operator, set the value of the enable parameter to false in the TektonConfig custom resource. apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: "false" bitbucket-cloud-check-source-ip: "true" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: "true" secret-auto-create: "true" # ... Optionally, you can run the following command: USD oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}' To enable the default installation of Pipelines as Code with the Red Hat OpenShift Pipelines Operator, set the value of the enable parameter to true in the TektonConfig custom resource: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: true settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: "false" bitbucket-cloud-check-source-ip: "true" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: "true" secret-auto-create: "true" # ... Optionally, you can run the following command: USD oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": true}}}}}' 2.2. Installing Pipelines as Code CLI Cluster administrators can use the tkn pac and opc CLI tools on local machines or as containers for testing. The tkn pac and opc CLI tools are installed automatically when you install the tkn CLI for Red Hat OpenShift Pipelines. You can install the tkn pac and opc version 1.15.0 binaries for the supported platforms: Linux (x86_64, amd64) Linux on IBM zSystems and IBM(R) LinuxONE (s390x) Linux on IBM Power (ppc64le) Linux on ARM (aarch64, arm64) macOS Windows 2.3. Customizing Pipelines as Code configuration To customize Pipelines as Code, cluster administrators can configure the following parameters in the TektonConfig custom resource, in the platforms.openshift.pipelinesAsCode.settings spec: Table 2.1. Customizing Pipelines as Code configuration Parameter Description Default application-name The name of the application. For example, the name displayed in the GitHub Checks labels. "Pipelines as Code CI" secret-auto-create Indicates whether or not a secret should be automatically created using the token generated in the GitHub application. This secret can then be used with private repositories. enabled remote-tasks When enabled, allows remote tasks from pipeline run annotations. enabled hub-url The base URL for the Tekton Hub API . https://hub.tekton.dev/ hub-catalog-name The Tekton Hub catalog name. tekton tekton-dashboard-url The URL of the Tekton Hub dashboard. Pipelines as Code uses this URL to generate a PipelineRun URL on the Tekton Hub dashboard. 
NA bitbucket-cloud-check-source-ip Indicates whether to secure the service requests by querying IP ranges for a public Bitbucket. Changing the parameter's default value might result in a security issue. enabled bitbucket-cloud-additional-source-ip Indicates whether to provide an additional set of IP ranges or networks, which are separated by commas. NA max-keep-run-upper-limit A maximum limit for the max-keep-run value for a pipeline run. NA default-max-keep-runs A default limit for the max-keep-run value for a pipeline run. If defined, the value is applied to all pipeline runs that do not have a max-keep-run annotation. NA auto-configure-new-github-repo Configures new GitHub repositories automatically. Pipelines as Code sets up a namespace and creates a custom resource for your repository. This parameter is only supported with GitHub applications. disabled auto-configure-repo-namespace-template Configures a template to automatically generate the namespace for your new repository, if auto-configure-new-github-repo is enabled. {repo_name}-pipelines error-log-snippet Enables or disables the view of a log snippet for the failed tasks, with an error in a pipeline. You can disable this parameter in the case of data leakage from your pipeline. true error-detection-from-container-logs Enables or disables the inspection of container logs to detect error messages and expose them as annotations on the pull request. This setting applies only if you are using the GitHub app. true error-detection-max-number-of-lines The maximum number of lines inspected in the container logs to search for error messages. Set to -1 to inspect an unlimited number of lines. 50 secret-github-app-token-scoped If set to true , the GitHub access token that Pipelines as Code generates using the GitHub app is scoped only to the repository from which Pipelines as Code fetches the pipeline definition. If set to false , you can use both the TektonConfig custom resource and the Repository custom resource to scope the token to additional repositories. true secret-github-app-scope-extra-repos Additional repositories for scoping the generated GitHub access token. 2.4. Configuring additional Pipelines as Code controllers to support additional GitHub apps By default, you can configure Pipelines as Code to interact with one GitHub app. In some cases you might need to use more than one GitHub app, for example, if you need to use different GitHub accounts or different GitHub instances such as GitHub Enterprise or GitHub SaaS. If you want to use more than one GitHub app, you must configure an additional Pipelines as Code controller for every additional GitHub app. Procedure In the TektonConfig custom resource, add the additionalPACControllers section to the platforms.openshift.pipelinesAsCode spec, as in the following example: Example additionalPACControllers section apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: additionalPACControllers: pac_controller_2: 1 enable: true 2 secretName: pac_secret_2 3 settings: # 4 # ... 1 The name of the controller. This name must be unique and not exceed 25 characters in length. 2 This parameter is optional. Set this parameter to true to enable the additional controller or to false to disable the additional controller. The default value is true . 3 Set this parameter to the name of a secret that you must create for the GitHub app. 4 This section is optional.
In this section, you can set any Pipelines as Code settings for this controller if the settings must be different from the main Pipelines as Code controller. Optional: If you want to use more than two GitHub apps, create additional sections under the pipelinesAsCode.additionalPACControllers spec to configure a Pipelines as Code controller for every GitHub instance. Use a unique name for every controller. Additional resources Customizing Pipelines as Code configuration Configuring a GitHub App manually and creating a secret for Pipelines as Code 2.5. Additional resources Installing OpenShift Pipelines Installing tkn Red Hat OpenShift Pipelines release notes | [
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: \"false\" bitbucket-cloud-check-source-ip: \"true\" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: \"true\" secret-auto-create: \"true\"",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": false}}}}}'",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: true settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: \"false\" bitbucket-cloud-check-source-ip: \"true\" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: \"true\" secret-auto-create: \"true\"",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": true}}}}}'",
"apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: additionalPACControllers: pac_controller_2: 1 enable: true 2 secretName: pac_secret_2 3 settings: # 4"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/pipelines_as_code/install-config-pipelines-as-code |
Chapter 48. QuotasPluginKafka schema reference | Chapter 48. QuotasPluginKafka schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the QuotasPluginKafka type from QuotasPluginStrimzi . It must have the value kafka for the type QuotasPluginKafka . Property Property type Description type string Must be kafka . producerByteRate integer The default client quota on the maximum bytes per-second that each client can publish to each broker before it is throttled. Applied on a per-broker basis. consumerByteRate integer The default client quota on the maximum bytes per-second that each client can fetch from each broker before it is throttled. Applied on a per-broker basis. requestPercentage integer The default client quota limits the maximum CPU utilization of each client as a percentage of the network and I/O threads of each broker. Applied on a per-broker basis. controllerMutationRate number The default client quota on the rate at which mutations are accepted per second for create topic requests, create partition requests, and delete topic requests, defined for each broker. The mutations rate is measured by the number of partitions created or deleted. Applied on a per-broker basis. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-quotaspluginkafka-reference |
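To illustrate the QuotasPluginKafka schema described above, here is a hedged YAML sketch of how the plugin might be enabled inside a Kafka custom resource. The surrounding resource skeleton, the placement of the quotas field under spec.kafka (inferred from the "Used in: KafkaClusterSpec" note), and all numeric values are illustrative assumptions rather than a definitive configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    quotas:
      type: kafka
      producerByteRate: 1048576       # assumed per-broker publish limit in bytes per second for each client
      consumerByteRate: 2097152       # assumed per-broker fetch limit in bytes per second for each client
      requestPercentage: 55           # assumed per-broker CPU utilization limit as a percentage of network and I/O threads
      controllerMutationRate: 50      # assumed per-broker limit on partition create/delete mutations per second
  # ...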
Chapter 4. Technology Previews | Chapter 4. Technology Previews This section provides a list of all Technology Previews available in OpenShift sandboxed containers 1.8. See Technology Preview Features Support Scope for more information. Peer pod support for IBM Z and IBM LinuxONE You can deploy OpenShift sandboxed containers workloads, without nested virtualization, by using peer pods on IBM Z(R) and IBM(R) LinuxONE (s390x architecture). Jira:KATA-2030 Confidential Containers on Microsoft Azure Cloud Computing Services, IBM Z, and IBM LinuxONE Confidential Containers provides enhanced security for cloud-native applications, allowing them to run in secure and isolated environments known as Trusted Execution Environments (TEEs), which protect the containers and their data even when in use. Note the following limitations: No encryption and integrity protection of the confidential virtual machine (CVM) root filesystem (rootfs): The CVM executes inside the TEE and runs the container workload. Lack of encryption and integrity protection of the rootfs could allow a malicious admin to exfiltrate sensitive data written to the rootfs or to tamper with the rootfs data. Integrity protection and encryption for the rootfs is currently work in progress. You must ensure that all your application writes are in memory. No encrypted container image support: Only signed container image support is currently available. Encrypted container image support is work in progress. Communication between the Kata shim and the agent components inside the CVM is subject to tampering: The agent components inside the CVM are responsible for executing Kubernetes API commands from the Kata shim running on the OpenShift worker node. We use an agent policy in the CVM that turns off Kubernetes exec and log APIs for the containers to avoid exfiltration of sensitive data via the Kubernetes API. However, this is incomplete; further work is ongoing to harden the communication channel between the shim and the agent components. The agent policy can be overridden at runtime by using pod annotations. Currently, runtime policy annotations in the pod are not verified by the attestation process. No native support for encrypted pod-to-pod communication: Pod-to-pod communication is unencrypted. You must use TLS at the application level for all pod-to-pod communication. Image double-pull on the worker node and inside the CVM: The container image is downloaded and executed in the CVM that executes inside the TEE. However, currently the image is also downloaded on the worker node. Building the CVM image for Confidential Containers requires the OpenShift sandboxed containers Operator to be available in the cluster. Jira:KATA-2416 | null | https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/release_notes/technology-previews |
Preface | Preface The following guide shows you how to configure the Red Hat Quay builds feature on both bare metal and virtual machines. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/builders_and_image_automation/pr01 |
Chapter 17. Admin CLI | Chapter 17. Admin CLI With Red Hat build of Keycloak, you can perform administration tasks from the command-line interface (CLI) by using the Admin CLI command-line tool. 17.1. Installing the Admin CLI Red Hat build of Keycloak packages the Admin CLI server distribution with the execution scripts in the bin directory. The Linux script is called kcadm.sh , and the script for Windows is called kcadm.bat . Add the Red Hat build of Keycloak server directory to your PATH to use the client from any location on your file system. For example: Linux: Windows: Note You must set the KEYCLOAK_HOME environment variable to the path where you extracted the Red Hat build of Keycloak Server distribution. To avoid repetition, the rest of this document only uses Windows examples in places where the CLI differences are more than just in the kcadm command name. 17.2. Using the Admin CLI The Admin CLI makes HTTP requests to Admin REST endpoints. Access to the Admin REST endpoints requires authentication. Note Consult the Admin REST API documentation for details about JSON attributes for specific endpoints. Start an authenticated session by logging in. You can now perform create, read, update, and delete (CRUD) operations. For example: Linux: Windows: In a production environment, access Red Hat build of Keycloak by using https: to avoid exposing tokens. If a trusted certificate authority, included in Java's default certificate truststore, has not issued a server's certificate, prepare a truststore.jks file and instruct the Admin CLI to use it. For example: Linux: Windows: 17.3. Authenticating When you log in with the Admin CLI, you specify: A server endpoint URL A realm A user name Another option is to specify a clientId only, which creates a unique service account for you to use. When you log in using a user name, use a password for the specified user. When you log in using a clientId, you need the client secret only, not the user password. You can also use the Signed JWT rather than the client secret. Ensure the account used for the session has the proper permissions to invoke Admin REST API operations. For example, the realm-admin role of the realm-management client can administer the realm of the user. Two primary mechanisms are available for authentication. One mechanism uses kcadm config credentials to start an authenticated session. This mechanism maintains an authenticated session between the kcadm command invocations by saving the obtained access token and its associated refresh token. It can maintain other secrets in a private configuration file. See the chapter for more information. The second mechanism authenticates each command invocation for the duration of the invocation. This mechanism increases the load on the server and the time spent on round trips obtaining tokens. The benefit of this approach is that it is unnecessary to save tokens between invocations, so nothing is saved to disk. Red Hat build of Keycloak uses this mode when the --no-config argument is specified. For example, when performing an operation, specify all the information required for authentication. Run the kcadm.sh help command for more information on using the Admin CLI. Run the kcadm.sh config credentials --help command for more information about starting an authenticated session. 17.4. Working with alternative configurations By default, the Admin CLI maintains a configuration file named kcadm.config . Red Hat build of Keycloak places this file in the user's home directory. 
In Linux-based systems, the full pathname is $HOME/.keycloak/kcadm.config . In Windows, the full pathname is %HOMEPATH%\.keycloak\kcadm.config . You can use the --config option to point to a different file or location so you can maintain multiple authenticated sessions in parallel. Note Perform operations tied to a single configuration file from a single thread. Ensure the configuration file is invisible to other users on the system. It contains access tokens and secrets that must be private. Red Hat build of Keycloak creates the ~/.keycloak directory and its contents automatically with proper access limits. If the directory already exists, Red Hat build of Keycloak does not update the directory's permissions. It is possible to avoid storing secrets inside a configuration file, but doing so is inconvenient and increases the number of token requests. Use the --no-config option with all commands and specify the authentication information the config credentials command requires with each invocation of kcadm . 17.5. Basic operations and resource URIs The Admin CLI can generically perform CRUD operations against Admin REST API endpoints with additional commands that simplify particular tasks. The main usage pattern is listed here: The create , get , update , and delete commands map to the HTTP verbs POST , GET , PUT , and DELETE , respectively. ENDPOINT is a target resource URI and can be absolute (starting with http: or https: ) or relative, which Red Hat build of Keycloak uses to compose absolute URLs in the following format: For example, if you authenticate against the server http://localhost:8080 and realm is master , using users as ENDPOINT creates the http://localhost:8080/admin/realms/master/users resource URL. If you set ENDPOINT to clients , the effective resource URI is http://localhost:8080/admin/realms/master/clients . Red Hat build of Keycloak has a realms endpoint that is the container for realms. It resolves to: Red Hat build of Keycloak has a serverinfo endpoint. This endpoint is independent of realms. When you authenticate as a user with realm-admin powers, you may need to perform commands on multiple realms. If so, specify the -r option to tell the CLI which realm the command is to execute against explicitly. Instead of using REALM as specified by the --realm option of kcadm.sh config credentials , the command uses TARGET_REALM . For example: In this example, you start a session authenticated as the admin user in the master realm. You then perform a POST call against the resource URL http://localhost:8080/admin/realms/demorealm/users . The create and update commands send a JSON body to the server. You can use -f FILENAME to read a pre-made document from a file. When you use the -f - option, Red Hat build of Keycloak reads the message body from the standard input. You can specify individual attributes and their values, as seen in the create users example. Red Hat build of Keycloak composes the attributes into a JSON body and sends them to the server. Several methods are available in Red Hat build of Keycloak to update a resource using the update command. You can determine the current state of a resource and save it to a file, edit that file, and send it to the server for an update. For example: This method updates the resource on the server with the attributes in the sent JSON document. Another method is to perform an on-the-fly update by using the -s, --set options to set new values. For example: This method sets the enabled attribute to false.
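You can also pass several -s options in a single call; a minimal sketch (the displayName value is an arbitrary illustration, not taken from the official examples):
kcadm.sh update realms/demorealm -s enabled=true -s 'displayName=Demo Realm'
All of the -s pairs are combined into one JSON body before the request is sent.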
By default, the update command performs a get and then merges the new attribute values with existing values. In some cases, the endpoint may support the put command but not the get command. You can use the -n option to perform a no-merge update, which performs a put command without first running a get command. 17.6. Realm operations Creating a new realm Use the create command on the realms endpoint to create a new enabled realm. Set the attributes to realm and enabled . Red Hat build of Keycloak disables realms by default. You can use a realm immediately for authentication by enabling it. A description for a new object can also be in JSON format. You can send a JSON document with realm attributes directly from a file or pipe the document to standard input. For example: Linux: Windows: Listing existing realms This command returns a list of all realms. Note Red Hat build of Keycloak filters the list of realms on the server to return realms a user can see only. The list of all realm attributes can be verbose, and most users are interested in a subset of attributes, such as the realm name and the enabled status of the realm. You can specify the attributes to return by using the --fields option. You can display the result as comma-separated values. Getting a specific realm Append a realm name to a collection URI to get an individual realm. Updating a realm Use the -s option to set new values for the attributes when you do not want to change all of the realm's attributes. For example: If you want to set all writable attributes to new values: Run a get command. Edit the current values in the JSON file. Resubmit. For example: Deleting a realm Run the following command to delete a realm: Turning on all login page options for the realm Set the attributes that control specific capabilities to true . For example: Listing the realm keys Use the get operation on the keys endpoint of the target realm. Generating new realm keys Get the ID of the target realm before adding a new RSA-generated key pair. For example: Add a new key provider with a higher priority than the existing providers as revealed by kcadm.sh get keys -r demorealm . For example: Linux: Windows: Set the parentId attribute to the value of the target realm's ID. The newly added key is now the active key, as revealed by kcadm.sh get keys -r demorealm . Adding new realm keys from a Java Key Store file Add a new key provider to add a new key pair pre-prepared as a JKS file. For example, on: Linux: Windows: Ensure you change the attribute values for keystore , keystorePassword , keyPassword , and alias to match your specific keystore. Set the parentId attribute to the value of the target realm's ID. Making the key passive or disabling the key Identify the key you want to make passive. Use the key's providerId attribute to construct an endpoint URI, such as components/PROVIDER_ID . Perform an update . For example: Linux: Windows: You can update other key attributes: Set a new enabled value to disable the key, for example, config.enabled=["false"] . Set a new priority value to change the key's priority, for example, config.priority=["110"] . Deleting an old key Ensure the key you are deleting is inactive and you have disabled it. This action is to prevent existing tokens held by applications and users from failing. Identify the key to delete. Use the providerId of the key to perform the delete. Configuring event logging for a realm Use the update command on the events/config endpoint. 
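Before changing the configuration, you can inspect the current settings; a minimal sketch, assuming the demorealm realm used throughout this chapter:
kcadm.sh get events/config -r demorealm
The command returns a JSON document whose attributes you can then modify with update .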
The eventsListeners attribute contains a list of EventListenerProviderFactory IDs, specifying all event listeners that receive events. Attributes are available that control built-in event storage, so you can query past events using the Admin REST API. Red Hat build of Keycloak has separate control over the logging of service calls ( eventsEnabled ) and the auditing events triggered by the Admin Console or Admin REST API ( adminEventsEnabled ). You can set up the eventsExpiration event to expire to prevent your database from filling. Red Hat build of Keycloak sets eventsExpiration to time-to-live expressed in seconds. You can set up a built-in event listener that receives all events and logs the events through JBoss-logging. Using the org.keycloak.events logger, Red Hat build of Keycloak logs error events as WARN and other events as DEBUG . For example: Linux: Windows: For example: You can turn on storage for all available ERROR events, not including auditing events, for two days so you can retrieve the events through Admin REST. Linux: Windows: You can reset stored event types to all available event types . Setting the value to an empty list is the same as enumerating all. You can enable storage of auditing events. You can get the last 100 events. The events are ordered from newest to oldest. You can delete all saved events. Flushing the caches Use the create command with one of these endpoints to clear caches: clear-realm-cache clear-user-cache clear-keys-cache Set realm to the same value as the target realm. For example: Importing a realm from exported .json file Use the create command on the partialImport endpoint. Set ifResourceExists to FAIL , SKIP , or OVERWRITE . Use -f to submit the exported realm .json file. For example: If the realm does not yet exist, create it first. For example: 17.7. Role operations Creating a realm role Use the roles endpoint to create a realm role. Creating a client role Identify the client. Use the get command to list the available clients. Create a new role by using the clientId attribute to construct an endpoint URI, such as clients/ID/roles . For example: Listing realm roles Use the get command on the roles endpoint to list existing realm roles. You can use the get-roles command also. Listing client roles Red Hat build of Keycloak has a dedicated get-roles command to simplify the listing of realm and client roles. The command is an extension of the get command and behaves the same as the get command but with additional semantics for listing roles. Use the get-roles command by passing it the clientId ( --cclientid ) option or the id ( --cid ) option to identify the client to list client roles. For example: Getting a specific realm role Use the get command and the role name to construct an endpoint URI for a specific realm role, roles/ROLE_NAME , where user is the existing role's name. For example: You can use the get-roles command, passing it a role name ( --rolename option) or ID ( --roleid option). For example: Getting a specific client role Use the get-roles command, passing it the clientId attribute ( --cclientid option) or ID attribute ( --cid option) to identify the client, and pass the role name ( --rolename option) or the role ID attribute ( --roleid ) to identify a specific client role. For example: Updating a realm role Use the update command with the endpoint URI you used to get a specific realm role. For example: Updating a client role Use the update command with the endpoint URI that you used to get a specific client role. 
For example: Deleting a realm role Use the delete command with the endpoint URI that you used to get a specific realm role. For example: Deleting a client role Use the delete command with the endpoint URI that you used to get a specific client role. For example: Listing assigned, available, and effective realm roles for a composite role Use the get-roles command to list assigned, available, and effective realm roles for a composite role. To list assigned realm roles for the composite role, specify the target composite role by name ( --rname option) or ID ( --rid option). For example: Use the --effective option to list effective realm roles. For example: Use the --available option to list realm roles that you can add to the composite role. For example: Listing assigned, available, and effective client roles for a composite role Use the get-roles command to list assigned, available, and effective client roles for a composite role. To list assigned client roles for the composite role, you can specify the target composite role by name ( --rname option) or ID ( --rid option) and client by the clientId attribute ( --cclientid option) or ID ( --cid option). For example: Use the --effective option to list effective realm roles. For example: Use the --available option to list realm roles that you can add to the target composite role. For example: Adding realm roles to a composite role Red Hat build of Keycloak provides an add-roles command for adding realm roles and client roles. This example adds the user role to the composite role testrole . Removing realm roles from a composite role Red Hat build of Keycloak provides a remove-roles command for removing realm roles and client roles. The following example removes the user role from the target composite role testrole . Adding client roles to a realm role Red Hat build of Keycloak provides an add-roles command for adding realm roles and client roles. The following example adds the roles defined on the client realm-management , create-client , and view-users , to the testrole composite role. Adding client roles to a client role Determine the ID of the composite client role by using the get-roles command. For example: Assume that a client exists with a clientId attribute named test-client , a client role named support , and a client role named operations which becomes a composite role that has an ID of "fc400897-ef6a-4e8c-872b-1581b7fa8a71". Use the following example to add another role to the composite role. List the roles of a composite role by using the get-roles --all command. For example: Removing client roles from a composite role Use the remove-roles command to remove client roles from a composite role. Use the following example to remove two roles defined on the client realm-management , the create-client role and the view-users role, from the testrole composite role. Adding client roles to a group Use the add-roles command to add realm roles and client roles. The following example adds the roles defined on the client realm-management , create-client and view-users , to the Group group ( --gname option). Alternatively, you can specify the group by ID ( --gid option). See Group operations for more information. Removing client roles from a group Use the remove-roles command to remove client roles from a group. The following example removes two roles defined on the client realm management , create-client and view-users , from the Group group. See Group operations for more information. 17.8. 
Client operations Creating a client Run the create command on a clients endpoint to create a new client. For example: Specify a secret if you want to set a secret for adapters to authenticate. For example: Listing clients Use the get command on the clients endpoint to list clients. This example filters the output to list only the id and clientId attributes: Getting a specific client Use the client ID to construct an endpoint URI that targets a specific client, such as clients/ID . For example: Getting the current secret for a specific client Use the client ID to construct an endpoint URI, such as clients/ID/client-secret . For example: Generating a new secret for a specific client Use the client ID to construct an endpoint URI, such as clients/ID/client-secret . For example: Updating the current secret for a specific client Use the client ID to construct an endpoint URI, such as clients/ID . For example: Getting an adapter configuration file (keycloak.json) for a specific client Use the client ID to construct an endpoint URI that targets a specific client, such as clients/ID/installation/providers/keycloak-oidc-keycloak-json . For example: Getting a WildFly subsystem adapter configuration for a specific client Use the client ID to construct an endpoint URI that targets a specific client, such as clients/ID/installation/providers/keycloak-oidc-jboss-subsystem . For example: Getting a Docker-v2 example configuration for a specific client Use the client ID to construct an endpoint URI that targets a specific client, such as clients/ID/installation/providers/docker-v2-compose-yaml . The response is in .zip format. For example: Updating a client Use the update command with the same endpoint URI that you use to get a specific client. For example: Linux: Windows: Deleting a client Use the delete command with the same endpoint URI that you use to get a specific client. For example: Adding or removing roles for a client's service account A client's service account is a user account with the username service-account-CLIENT_ID . You can perform the same user operations on this account as on a regular account. 17.9. User operations Creating a user Run the create command on the users endpoint to create a new user. For example: Listing users Use the users endpoint to list users. For example: You can filter users by username , firstName , lastName , or email . For example: Note Filtering does not use exact matching. This example matches the value of the username attribute against the *testuser* pattern. You can filter across multiple attributes by specifying multiple -q options. Red Hat build of Keycloak returns only users that match the condition for all the attributes. Getting a specific user Use the user ID to compose an endpoint URI, such as users/USER_ID . For example: Updating a user Use the update command with the same endpoint URI that you use to get a specific user. For example: Linux: Windows: Deleting a user Use the delete command with the same endpoint URI that you use to get a specific user. For example: Resetting a user's password Use the dedicated set-password command to reset a user's password. For example: This command sets a temporary password for the user. The target user must change the password the next time they log in. You can use --userid to specify the user by using the id attribute.
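A minimal sketch of the --userid variant (the ID shown is the placeholder user ID used in the earlier examples):
kcadm.sh set-password -r demorealm --userid 0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 --new-password NEWPASSWORD --temporary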
You can achieve the same result using the update command on an endpoint constructed from the one you used to get a specific user, such as users/USER_ID/reset-password . For example: The -n parameter ensures that Red Hat build of Keycloak performs the PUT command without performing a GET command before the PUT command. This is necessary because the reset-password endpoint does not support GET . Listing assigned, available, and effective realm roles for a user You can use a get-roles command to list assigned, available, and effective realm roles for a user. Specify the target user by user name or ID to list the user's assigned realm roles. For example: Use the --effective option to list effective realm roles. For example: Use the --available option to list realm roles that you can add to a user. For example: Listing assigned, available, and effective client roles for a user Use a get-roles command to list assigned, available, and effective client roles for a user. Specify the target user by user name ( --uusername option) or ID ( --uid option) and client by a clientId attribute ( --cclientid option) or an ID ( --cid option) to list assigned client roles for the user. For example: Use the --effective option to list effective realm roles. For example: Use the --available option to list realm roles that you can add to a user. For example: Adding realm roles to a user Use an add-roles command to add realm roles to a user. Use the following example to add the user role to user testuser : Removing realm roles from a user Use a remove-roles command to remove realm roles from a user. Use the following example to remove the user role from the user testuser : Adding client roles to a user Use an add-roles command to add client roles to a user. Use the following example to add two roles defined on the client realm management , the create-client role and the view-users role, to the user testuser . Removing client roles from a user Use a remove-roles command to remove client roles from a user. Use the following example to remove two roles defined on the realm management client: Listing a user's sessions Identify the user's ID, Use the ID to compose an endpoint URI, such as users/ID/sessions . Use the get command to retrieve a list of the user's sessions. For example: Logging out a user from a specific session Determine the session's ID as described earlier. Use the session's ID to compose an endpoint URI, such as sessions/ID . Use the delete command to invalidate the session. For example: Logging out a user from all sessions Use the user's ID to construct an endpoint URI, such as users/ID/logout . Use the create command to perform POST on that endpoint URI. For example: 17.10. Group operations Creating a group Use the create command on the groups endpoint to create a new group. For example: Listing groups Use the get command on the groups endpoint to list groups. For example: Getting a specific group Use the group's ID to construct an endpoint URI, such as groups/GROUP_ID . For example: Updating a group Use the update command with the same endpoint URI that you use to get a specific group. For example: Deleting a group Use the delete command with the same endpoint URI that you use to get a specific group. For example: Creating a subgroup Find the ID of the parent group by listing groups. Use that ID to construct an endpoint URI, such as groups/GROUP_ID/children . For example: Moving a group under another group Find the ID of an existing parent group and the ID of an existing child group. 
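A minimal sketch for locating both IDs, assuming the demorealm realm used in the other examples:
kcadm.sh get groups -r demorealm --fields id,name
Note the id values of the intended parent group and child group before continuing.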
Use the parent group's ID to construct an endpoint URI, such as groups/PARENT_GROUP_ID/children . Run the create command on this endpoint and pass the child group's ID as a JSON body. For example: Get groups for a specific user Use a user's ID to determine a user's membership in groups to compose an endpoint URI, such as users/USER_ID/groups . For example: Adding a user to a group Use the update command with an endpoint URI composed of a user's ID and a group's ID, such as users/USER_ID/groups/GROUP_ID , to add a user to a group. For example: Removing a user from a group Use the delete command on the same endpoint URI you use for adding a user to a group, such as users/USER_ID/groups/GROUP_ID , to remove a user from a group. For example: Listing assigned, available, and effective realm roles for a group Use a dedicated get-roles command to list assigned, available, and effective realm roles for a group. Specify the target group by name ( --gname option), path ( --gpath option), or ID ( --gid option) to list assigned realm roles for the group. For example: Use the --effective option to list effective realm roles. For example: Use the --available option to list realm roles that you can add to the group. For example: Listing assigned, available, and effective client roles for a group Use the get-roles command to list assigned, available, and effective client roles for a group. Specify the target group by name ( --gname option) or ID ( --gid option), Specify the client by the clientId attribute ( --cclientid option) or ID ( --id option) to list assigned client roles for the user. For example: Use the --effective option to list effective realm roles. For example: Use the --available option to list realm roles that you can still add to the group. For example: 17.11. Identity provider operations Listing available identity providers Use the serverinfo endpoint to list available identity providers. For example: Note Red Hat build of Keycloak processes the serverinfo endpoint similarly to the realms endpoint. Red Hat build of Keycloak does not resolve the endpoint relative to a target realm because it exists outside any specific realm. Listing configured identity providers Use the identity-provider/instances endpoint. For example: Getting a specific configured identity provider Use the identity provider's alias attribute to construct an endpoint URI, such as identity-provider/instances/ALIAS , to get a specific identity provider. For example: Removing a specific configured identity provider Use the delete command with the same endpoint URI that you use to get a specific configured identity provider to remove a specific configured identity provider. For example: Configuring a Keycloak OpenID Connect identity provider Use keycloak-oidc as the providerId when you create a new identity provider instance. Provide the config attributes: authorizationUrl , tokenUrl , clientId , and clientSecret . For example: Configuring an OpenID Connect identity provider Configure the generic OpenID Connect provider the same way you configure the Keycloak OpenID Connect provider, except you set the providerId attribute value to oidc . Configuring a SAML 2 identity provider Use saml as the providerId . Provide the config attributes: singleSignOnServiceUrl , nameIDPolicyFormat , and signatureAlgorithm . For example: Configuring a Facebook identity provider Use facebook as the providerId . Provide the config attributes: clientId and clientSecret . 
You can find these attributes in the Facebook Developers application configuration page for your application. See the Facebook identity broker page for more information. For example: Configuring a Google identity provider Use google as the providerId . Provide the config attributes: clientId and clientSecret . You can find these attributes in the Google Developers application configuration page for your application. See the Google identity broker page for more information. For example: Configuring a Twitter identity provider Use twitter as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the Twitter Application Management application configuration page for your application. See the Twitter identity broker page for more information. For example: Configuring a GitHub identity provider Use github as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the GitHub Developer Application Settings page for your application. See the GitHub identity broker page for more information. For example: Configuring a LinkedIn identity provider Use linkedin as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the LinkedIn Developer Console application page for your application. See the LinkedIn identity broker page for more information. For example: Configuring a Microsoft Live identity provider Use microsoft as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the Microsoft Application Registration Portal page for your application. See the Microsoft identity broker page for more information. For example: Configuring a Stack Overflow identity provider Use stackoverflow command as the providerId . Provide the config attributes clientId , clientSecret , and key . You can find these attributes in the Stack Apps OAuth page for your application. See the Stack Overflow identity broker page for more information. For example: 17.12. Storage provider operations Configuring a Kerberos storage provider Use the create command against the components endpoint. Specify the realm id as a value of the parentId attribute. Specify kerberos as the value of the providerId attribute, and org.keycloak.storage.UserStorageProvider as the value of the providerType attribute. For example: Configuring an LDAP user storage provider Use the create command against the components endpoint. Specify ldap as the value of the providerId attribute, and org.keycloak.storage.UserStorageProvider as the value of the providerType attribute. Provide the realm ID as the value of the parentId attribute. Use the following example to create a Kerberos-integrated LDAP provider. Removing a user storage provider instance Use the storage provider instance's id attribute to compose an endpoint URI, such as components/ID . Run the delete command against this endpoint. For example: Triggering synchronization of all users for a specific user storage provider Use the storage provider's id attribute to compose an endpoint URI, such as user-storage/ID_OF_USER_STORAGE_INSTANCE/sync . Add the action=triggerFullSync query parameter. Run the create command. For example: Triggering synchronization of changed users for a specific user storage provider Use the storage provider's id attribute to compose an endpoint URI, such as user-storage/ID_OF_USER_STORAGE_INSTANCE/sync . Add the action=triggerChangedUsersSync query parameter. 
Run the create command. For example: Test LDAP user storage connectivity Run the get command on the testLDAPConnection endpoint. Provide query parameters bindCredential , bindDn , connectionUrl , and useTruststoreSpi . Set the action query parameter to testConnection . For example: Test LDAP user storage authentication Run the get command on the testLDAPConnection endpoint. Provide the query parameters bindCredential , bindDn , connectionUrl , and useTruststoreSpi . Set the action query parameter to testAuthentication . For example: 17.13. Adding mappers Adding a hard-coded role LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to hardcoded-ldap-role-mapper . Ensure you provide a value of role configuration parameter. For example: Adding an MS Active Directory mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to msad-user-account-control-mapper . For example: Adding a user attribute LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to user-attribute-ldap-mapper . For example: Adding a group LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to group-ldap-mapper . For example: Adding a full name LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to full-name-ldap-mapper . For example: 17.14. Authentication operations Setting a password policy Set the realm's passwordPolicy attribute to an enumeration expression that includes the specific policy provider ID and optional configuration. Use the following example to set a password policy to default values. The default values include: 27,500 hashing iterations at least one special character at least one uppercase character at least one digit character not be equal to a user's username be at least eight characters long To use values different from defaults, pass the configuration in brackets. Use the following example to set a password policy to: 25,000 hash iterations at least two special characters at least two uppercase characters at least two lowercase characters at least two digits be at least nine characters long not be equal to a user's username not repeat for at least four changes back Obtaining the current password policy You can get the current realm configuration by filtering all output except for the passwordPolicy attribute. For example, display passwordPolicy for demorealm . Listing authentication flows Run the get command on the authentication/flows endpoint. For example: Getting a specific authentication flow Run the get command on the authentication/flows/FLOW_ID endpoint. 
For example: Listing executions for a flow Run the get command on the authentication/flows/FLOW_ALIAS/executions endpoint. For example: Adding configuration to an execution Get the executions for a flow. Note the ID of the target execution. Run the create command on the authentication/executions/{executionId}/config endpoint. For example: Getting configuration for an execution Get the execution for a flow. Note its authenticationConfig attribute, which contains the config ID. Run the get command on the authentication/config/ID endpoint. For example: Updating configuration for an execution Get the execution for the flow. Get the execution's authenticationConfig attribute. Note the config ID from the attribute. Run the update command on the authentication/config/ID endpoint. For example: Deleting configuration for an execution Get the execution for a flow. Get the execution's authenticationConfig attribute. Note the config ID from the attribute. Run the delete command on the authentication/config/ID endpoint. For example: | [
"export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcadm.sh",
"c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcadm",
"kcadm.sh config credentials --server http://localhost:8080 --realm demo --user admin --client admin kcadm.sh create realms -s realm=demorealm -s enabled=true -o CID=USD(kcadm.sh create clients -r demorealm -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' -i) kcadm.sh get clients/USDCID/installation/providers/keycloak-oidc-keycloak-json",
"c:\\> kcadm config credentials --server http://localhost:8080 --realm demo --user admin --client admin c:\\> kcadm create realms -s realm=demorealm -s enabled=true -o c:\\> kcadm create clients -r demorealm -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" -i > clientid.txt c:\\> set /p CID=<clientid.txt c:\\> kcadm get clients/%CID%/installation/providers/keycloak-oidc-keycloak-json",
"kcadm.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks",
"c:\\> kcadm config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks",
"kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin",
"kcadm.sh get realms --no-config --server http://localhost:8080 --realm master --user admin --password admin",
"kcadm.sh create ENDPOINT [ARGUMENTS] kcadm.sh get ENDPOINT [ARGUMENTS] kcadm.sh update ENDPOINT [ARGUMENTS] kcadm.sh delete ENDPOINT [ARGUMENTS]",
"SERVER_URI/admin/realms/REALM/ENDPOINT",
"SERVER_URI/admin/realms",
"SERVER_URI/admin/realms/TARGET_REALM/ENDPOINT",
"kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin kcadm.sh create users -s username=testuser -s enabled=true -r demorealm",
"kcadm.sh get realms/demorealm > demorealm.json vi demorealm.json kcadm.sh update realms/demorealm -f demorealm.json",
"kcadm.sh update realms/demorealm -s enabled=false",
"kcadm.sh create realms -s realm=demorealm -s enabled=true",
"kcadm.sh create realms -f demorealm.json",
"kcadm.sh create realms -f - << EOF { \"realm\": \"demorealm\", \"enabled\": true } EOF",
"c:\\> echo { \"realm\": \"demorealm\", \"enabled\": true } | kcadm create realms -f -",
"kcadm.sh get realms",
"kcadm.sh get realms --fields realm,enabled",
"kcadm.sh get realms --fields realm --format csv --noquotes",
"kcadm.sh get realms/master",
"kcadm.sh update realms/demorealm -s enabled=false",
"kcadm.sh get realms/demorealm > demorealm.json vi demorealm.json kcadm.sh update realms/demorealm -f demorealm.json",
"kcadm.sh delete realms/demorealm",
"kcadm.sh update realms/demorealm -s registrationAllowed=true -s registrationEmailAsUsername=true -s rememberMe=true -s verifyEmail=true -s resetPasswordAllowed=true -s editUsernameAllowed=true",
"kcadm.sh get keys -r demorealm",
"kcadm.sh get realms/demorealm --fields id --format csv --noquotes",
"kcadm.sh create components -r demorealm -s name=rsa-generated -s providerId=rsa-generated -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s 'config.priority=[\"101\"]' -s 'config.enabled=[\"true\"]' -s 'config.active=[\"true\"]' -s 'config.keySize=[\"2048\"]'",
"c:\\> kcadm create components -r demorealm -s name=rsa-generated -s providerId=rsa-generated -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s \"config.priority=[\\\"101\\\"]\" -s \"config.enabled=[\\\"true\\\"]\" -s \"config.active=[\\\"true\\\"]\" -s \"config.keySize=[\\\"2048\\\"]\"",
"kcadm.sh create components -r demorealm -s name=java-keystore -s providerId=java-keystore -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s 'config.priority=[\"101\"]' -s 'config.enabled=[\"true\"]' -s 'config.active=[\"true\"]' -s 'config.keystore=[\"/opt/keycloak/keystore.jks\"]' -s 'config.keystorePassword=[\"secret\"]' -s 'config.keyPassword=[\"secret\"]' -s 'config.keyAlias=[\"localhost\"]'",
"c:\\> kcadm create components -r demorealm -s name=java-keystore -s providerId=java-keystore -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s \"config.priority=[\\\"101\\\"]\" -s \"config.enabled=[\\\"true\\\"]\" -s \"config.active=[\\\"true\\\"]\" -s \"config.keystore=[\\\"/opt/keycloak/keystore.jks\\\"]\" -s \"config.keystorePassword=[\\\"secret\\\"]\" -s \"config.keyPassword=[\\\"secret\\\"]\" -s \"config.keyAlias=[\\\"localhost\\\"]\"",
"kcadm.sh get keys -r demorealm",
"kcadm.sh update components/PROVIDER_ID -r demorealm -s 'config.active=[\"false\"]'",
"c:\\> kcadm update components/PROVIDER_ID -r demorealm -s \"config.active=[\\\"false\\\"]\"",
"kcadm.sh get keys -r demorealm",
"kcadm.sh delete components/PROVIDER_ID -r demorealm",
"kcadm.sh update events/config -r demorealm -s 'eventsListeners=[\"jboss-logging\"]'",
"c:\\> kcadm update events/config -r demorealm -s \"eventsListeners=[\\\"jboss-logging\\\"]\"",
"kcadm.sh update events/config -r demorealm -s eventsEnabled=true -s 'enabledEventTypes=[\"LOGIN_ERROR\",\"REGISTER_ERROR\",\"LOGOUT_ERROR\",\"CODE_TO_TOKEN_ERROR\",\"CLIENT_LOGIN_ERROR\",\"FEDERATED_IDENTITY_LINK_ERROR\",\"REMOVE_FEDERATED_IDENTITY_ERROR\",\"UPDATE_EMAIL_ERROR\",\"UPDATE_PROFILE_ERROR\",\"UPDATE_PASSWORD_ERROR\",\"UPDATE_TOTP_ERROR\",\"VERIFY_EMAIL_ERROR\",\"REMOVE_TOTP_ERROR\",\"SEND_VERIFY_EMAIL_ERROR\",\"SEND_RESET_PASSWORD_ERROR\",\"SEND_IDENTITY_PROVIDER_LINK_ERROR\",\"RESET_PASSWORD_ERROR\",\"IDENTITY_PROVIDER_FIRST_LOGIN_ERROR\",\"IDENTITY_PROVIDER_POST_LOGIN_ERROR\",\"CUSTOM_REQUIRED_ACTION_ERROR\",\"EXECUTE_ACTIONS_ERROR\",\"CLIENT_REGISTER_ERROR\",\"CLIENT_UPDATE_ERROR\",\"CLIENT_DELETE_ERROR\"]' -s eventsExpiration=172800",
"c:\\> kcadm update events/config -r demorealm -s eventsEnabled=true -s \"enabledEventTypes=[\\\"LOGIN_ERROR\\\",\\\"REGISTER_ERROR\\\",\\\"LOGOUT_ERROR\\\",\\\"CODE_TO_TOKEN_ERROR\\\",\\\"CLIENT_LOGIN_ERROR\\\",\\\"FEDERATED_IDENTITY_LINK_ERROR\\\",\\\"REMOVE_FEDERATED_IDENTITY_ERROR\\\",\\\"UPDATE_EMAIL_ERROR\\\",\\\"UPDATE_PROFILE_ERROR\\\",\\\"UPDATE_PASSWORD_ERROR\\\",\\\"UPDATE_TOTP_ERROR\\\",\\\"VERIFY_EMAIL_ERROR\\\",\\\"REMOVE_TOTP_ERROR\\\",\\\"SEND_VERIFY_EMAIL_ERROR\\\",\\\"SEND_RESET_PASSWORD_ERROR\\\",\\\"SEND_IDENTITY_PROVIDER_LINK_ERROR\\\",\\\"RESET_PASSWORD_ERROR\\\",\\\"IDENTITY_PROVIDER_FIRST_LOGIN_ERROR\\\",\\\"IDENTITY_PROVIDER_POST_LOGIN_ERROR\\\",\\\"CUSTOM_REQUIRED_ACTION_ERROR\\\",\\\"EXECUTE_ACTIONS_ERROR\\\",\\\"CLIENT_REGISTER_ERROR\\\",\\\"CLIENT_UPDATE_ERROR\\\",\\\"CLIENT_DELETE_ERROR\\\"]\" -s eventsExpiration=172800",
"kcadm.sh update events/config -r demorealm -s enabledEventTypes=[]",
"kcadm.sh update events/config -r demorealm -s adminEventsEnabled=true -s adminEventsDetailsEnabled=true",
"kcadm.sh get events --offset 0 --limit 100",
"kcadm delete events",
"kcadm.sh create clear-realm-cache -r demorealm -s realm=demorealm kcadm.sh create clear-user-cache -r demorealm -s realm=demorealm kcadm.sh create clear-keys-cache -r demorealm -s realm=demorealm",
"kcadm.sh create partialImport -r demorealm2 -s ifResourceExists=FAIL -o -f demorealm.json",
"kcadm.sh create realms -s realm=demorealm2 -s enabled=true",
"kcadm.sh create roles -r demorealm -s name=user -s 'description=Regular user with a limited set of permissions'",
"kcadm.sh get clients -r demorealm --fields id,clientId",
"kcadm.sh create clients/a95b6af3-0bdc-4878-ae2e-6d61a4eca9a0/roles -r demorealm -s name=editor -s 'description=Editor can edit, and publish any article'",
"kcadm.sh get roles -r demorealm",
"kcadm.sh get-roles -r demorealm",
"kcadm.sh get-roles -r demorealm --cclientid realm-management",
"kcadm.sh get roles/user -r demorealm",
"kcadm.sh get-roles -r demorealm --rolename user",
"kcadm.sh get-roles -r demorealm --cclientid realm-management --rolename manage-clients",
"kcadm.sh update roles/user -r demorealm -s 'description=Role representing a regular user'",
"kcadm.sh update clients/a95b6af3-0bdc-4878-ae2e-6d61a4eca9a0/roles/editor -r demorealm -s 'description=User that can edit, and publish articles'",
"kcadm.sh delete roles/user -r demorealm",
"kcadm.sh delete clients/a95b6af3-0bdc-4878-ae2e-6d61a4eca9a0/roles/editor -r demorealm",
"kcadm.sh get-roles -r demorealm --rname testrole",
"kcadm.sh get-roles -r demorealm --rname testrole --effective",
"kcadm.sh get-roles -r demorealm --rname testrole --available",
"kcadm.sh get-roles -r demorealm --rname testrole --cclientid realm-management",
"kcadm.sh get-roles -r demorealm --rname testrole --cclientid realm-management --effective",
"kcadm.sh get-roles -r demorealm --rname testrole --cclientid realm-management --available",
"kcadm.sh add-roles --rname testrole --rolename user -r demorealm",
"kcadm.sh remove-roles --rname testrole --rolename user -r demorealm",
"kcadm.sh add-roles -r demorealm --rname testrole --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh get-roles -r demorealm --cclientid test-client --rolename operations",
"kcadm.sh add-roles -r demorealm --cclientid test-client --rid fc400897-ef6a-4e8c-872b-1581b7fa8a71 --rolename support",
"kcadm.sh get-roles --rid fc400897-ef6a-4e8c-872b-1581b7fa8a71 --all",
"kcadm.sh remove-roles -r demorealm --rname testrole --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh add-roles -r demorealm --gname Group --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh remove-roles -r demorealm --gname Group --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh create clients -r demorealm -s clientId=myapp -s enabled=true",
"kcadm.sh create clients -r demorealm -s clientId=myapp -s enabled=true -s clientAuthenticatorType=client-secret -s secret=d0b8122f-8dfb-46b7-b68a-f5cc4e25d000",
"kcadm.sh get clients -r demorealm --fields id,clientId",
"kcadm.sh get clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm",
"kcadm.sh get clients/USDCID/client-secret -r demorealm",
"kcadm.sh create clients/USDCID/client-secret -r demorealm",
"kcadm.sh update clients/USDCID -s \"secret=newSecret\" -r demorealm",
"kcadm.sh get clients/c7b8547f-e748-4333-95d0-410b76b3f4a3/installation/providers/keycloak-oidc-keycloak-json -r demorealm",
"kcadm.sh get clients/c7b8547f-e748-4333-95d0-410b76b3f4a3/installation/providers/keycloak-oidc-jboss-subsystem -r demorealm",
"kcadm.sh get http://localhost:8080/admin/realms/demorealm/clients/8f271c35-44e3-446f-8953-b0893810ebe7/installation/providers/docker-v2-compose-yaml -r demorealm > keycloak-docker-compose-yaml.zip",
"kcadm.sh update clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm -s enabled=false -s publicClient=true -s 'redirectUris=[\"http://localhost:8080/myapp/*\"]' -s baseUrl=http://localhost:8080/myapp -s adminUrl=http://localhost:8080/myapp",
"c:\\> kcadm update clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm -s enabled=false -s publicClient=true -s \"redirectUris=[\\\"http://localhost:8080/myapp/*\\\"]\" -s baseUrl=http://localhost:8080/myapp -s adminUrl=http://localhost:8080/myapp",
"kcadm.sh delete clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm",
"kcadm.sh create users -r demorealm -s username=testuser -s enabled=true",
"kcadm.sh get users -r demorealm --offset 0 --limit 1000",
"kcadm.sh get users -r demorealm -q email=google.com kcadm.sh get users -r demorealm -q username=testuser",
"kcadm.sh get users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm",
"kcadm.sh update users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm -s 'requiredActions=[\"VERIFY_EMAIL\",\"UPDATE_PROFILE\",\"CONFIGURE_TOTP\",\"UPDATE_PASSWORD\"]'",
"c:\\> kcadm update users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm -s \"requiredActions=[\\\"VERIFY_EMAIL\\\",\\\"UPDATE_PROFILE\\\",\\\"CONFIGURE_TOTP\\\",\\\"UPDATE_PASSWORD\\\"]\"",
"kcadm.sh delete users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm",
"kcadm.sh set-password -r demorealm --username testuser --new-password NEWPASSWORD --temporary",
"kcadm.sh update users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2/reset-password -r demorealm -s type=password -s value=NEWPASSWORD -s temporary=true -n",
"kcadm.sh get-roles -r demorealm --uusername testuser",
"kcadm.sh get-roles -r demorealm --uusername testuser --effective",
"kcadm.sh get-roles -r demorealm --uusername testuser --available",
"kcadm.sh get-roles -r demorealm --uusername testuser --cclientid realm-management",
"kcadm.sh get-roles -r demorealm --uusername testuser --cclientid realm-management --effective",
"kcadm.sh get-roles -r demorealm --uusername testuser --cclientid realm-management --available",
"kcadm.sh add-roles --uusername testuser --rolename user -r demorealm",
"kcadm.sh remove-roles --uusername testuser --rolename user -r demorealm",
"kcadm.sh add-roles -r demorealm --uusername testuser --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh remove-roles -r demorealm --uusername testuser --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh get users/6da5ab89-3397-4205-afaa-e201ff638f9e/sessions -r demorealm",
"kcadm.sh delete sessions/d0eaa7cc-8c5d-489d-811a-69d3c4ec84d1 -r demorealm",
"kcadm.sh create users/6da5ab89-3397-4205-afaa-e201ff638f9e/logout -r demorealm -s realm=demorealm -s user=6da5ab89-3397-4205-afaa-e201ff638f9e",
"kcadm.sh create groups -r demorealm -s name=Group",
"kcadm.sh get groups -r demorealm",
"kcadm.sh get groups/51204821-0580-46db-8f2d-27106c6b5ded -r demorealm",
"kcadm.sh update groups/51204821-0580-46db-8f2d-27106c6b5ded -s 'attributes.email=[\"[email protected]\"]' -r demorealm",
"kcadm.sh delete groups/51204821-0580-46db-8f2d-27106c6b5ded -r demorealm",
"kcadm.sh create groups/51204821-0580-46db-8f2d-27106c6b5ded/children -r demorealm -s name=SubGroup",
"kcadm.sh create groups/51204821-0580-46db-8f2d-27106c6b5ded/children -r demorealm -s id=08d410c6-d585-4059-bb07-54dcb92c5094 -s name=SubGroup",
"kcadm.sh get users/b544f379-5fc4-49e5-8a8d-5cfb71f46f53/groups -r demorealm",
"kcadm.sh update users/b544f379-5fc4-49e5-8a8d-5cfb71f46f53/groups/ce01117a-7426-4670-a29a-5c118056fe20 -r demorealm -s realm=demorealm -s userId=b544f379-5fc4-49e5-8a8d-5cfb71f46f53 -s groupId=ce01117a-7426-4670-a29a-5c118056fe20 -n",
"kcadm.sh delete users/b544f379-5fc4-49e5-8a8d-5cfb71f46f53/groups/ce01117a-7426-4670-a29a-5c118056fe20 -r demorealm",
"kcadm.sh get-roles -r demorealm --gname Group",
"kcadm.sh get-roles -r demorealm --gname Group --effective",
"kcadm.sh get-roles -r demorealm --gname Group --available",
"kcadm.sh get-roles -r demorealm --gname Group --cclientid realm-management",
"kcadm.sh get-roles -r demorealm --gname Group --cclientid realm-management --effective",
"kcadm.sh get-roles -r demorealm --gname Group --cclientid realm-management --available",
"kcadm.sh get serverinfo -r demorealm --fields 'identityProviders(*)'",
"kcadm.sh get identity-provider/instances -r demorealm --fields alias,providerId,enabled",
"kcadm.sh get identity-provider/instances/facebook -r demorealm",
"kcadm.sh delete identity-provider/instances/facebook -r demorealm",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=keycloak-oidc -s providerId=keycloak-oidc -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.authorizationUrl=http://localhost:8180/realms/demorealm/protocol/openid-connect/auth -s config.tokenUrl=http://localhost:8180/realms/demorealm/protocol/openid-connect/token -s config.clientId=demo-oidc-provider -s config.clientSecret=secret",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=saml -s providerId=saml -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.singleSignOnServiceUrl=http://localhost:8180/realms/saml-broker-realm/protocol/saml -s config.nameIDPolicyFormat=urn:oasis:names:tc:SAML:2.0:nameid-format:persistent -s config.signatureAlgorithm=RSA_SHA256",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=facebook -s providerId=facebook -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=FACEBOOK_CLIENT_ID -s config.clientSecret=FACEBOOK_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=google -s providerId=google -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=GOOGLE_CLIENT_ID -s config.clientSecret=GOOGLE_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=google -s providerId=google -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=TWITTER_API_KEY -s config.clientSecret=TWITTER_API_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=github -s providerId=github -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=GITHUB_CLIENT_ID -s config.clientSecret=GITHUB_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=linkedin -s providerId=linkedin -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=LINKEDIN_CLIENT_ID -s config.clientSecret=LINKEDIN_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=microsoft -s providerId=microsoft -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=MICROSOFT_APP_ID -s config.clientSecret=MICROSOFT_PASSWORD",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=stackoverflow -s providerId=stackoverflow -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=STACKAPPS_CLIENT_ID -s config.clientSecret=STACKAPPS_CLIENT_SECRET -s config.key=STACKAPPS_KEY",
"kcadm.sh create components -r demorealm -s parentId=demorealmId -s id=demokerberos -s name=demokerberos -s providerId=kerberos -s providerType=org.keycloak.storage.UserStorageProvider -s 'config.priority=[\"0\"]' -s 'config.debug=[\"false\"]' -s 'config.allowPasswordAuthentication=[\"true\"]' -s 'config.editMode=[\"UNSYNCED\"]' -s 'config.updateProfileFirstLogin=[\"true\"]' -s 'config.allowKerberosAuthentication=[\"true\"]' -s 'config.kerberosRealm=[\"KEYCLOAK.ORG\"]' -s 'config.keyTab=[\"http.keytab\"]' -s 'config.serverPrincipal=[\"HTTP/[email protected]\"]' -s 'config.cachePolicy=[\"DEFAULT\"]'",
"kcadm.sh create components -r demorealm -s name=kerberos-ldap-provider -s providerId=ldap -s providerType=org.keycloak.storage.UserStorageProvider -s parentId=3d9c572b-8f33-483f-98a6-8bb421667867 -s 'config.priority=[\"1\"]' -s 'config.fullSyncPeriod=[\"-1\"]' -s 'config.changedSyncPeriod=[\"-1\"]' -s 'config.cachePolicy=[\"DEFAULT\"]' -s config.evictionDay=[] -s config.evictionHour=[] -s config.evictionMinute=[] -s config.maxLifespan=[] -s 'config.batchSizeForSync=[\"1000\"]' -s 'config.editMode=[\"WRITABLE\"]' -s 'config.syncRegistrations=[\"false\"]' -s 'config.vendor=[\"other\"]' -s 'config.usernameLDAPAttribute=[\"uid\"]' -s 'config.rdnLDAPAttribute=[\"uid\"]' -s 'config.uuidLDAPAttribute=[\"entryUUID\"]' -s 'config.userObjectClasses=[\"inetOrgPerson, organizationalPerson\"]' -s 'config.connectionUrl=[\"ldap://localhost:10389\"]' -s 'config.usersDn=[\"ou=People,dc=keycloak,dc=org\"]' -s 'config.authType=[\"simple\"]' -s 'config.bindDn=[\"uid=admin,ou=system\"]' -s 'config.bindCredential=[\"secret\"]' -s 'config.searchScope=[\"1\"]' -s 'config.useTruststoreSpi=[\"always\"]' -s 'config.connectionPooling=[\"true\"]' -s 'config.pagination=[\"true\"]' -s 'config.allowKerberosAuthentication=[\"true\"]' -s 'config.serverPrincipal=[\"HTTP/[email protected]\"]' -s 'config.keyTab=[\"http.keytab\"]' -s 'config.kerberosRealm=[\"KEYCLOAK.ORG\"]' -s 'config.debug=[\"true\"]' -s 'config.useKerberosForPasswordAuthentication=[\"true\"]'",
"kcadm.sh delete components/3d9c572b-8f33-483f-98a6-8bb421667867 -r demorealm",
"kcadm.sh create user-storage/b7c63d02-b62a-4fc1-977c-947d6a09e1ea/sync?action=triggerFullSync",
"kcadm.sh create user-storage/b7c63d02-b62a-4fc1-977c-947d6a09e1ea/sync?action=triggerChangedUsersSync",
"kcadm.sh create testLDAPConnection -s action=testConnection -s bindCredential=secret -s bindDn=uid=admin,ou=system -s connectionUrl=ldap://localhost:10389 -s useTruststoreSpi=always",
"kcadm.sh create testLDAPConnection -s action=testAuthentication -s bindCredential=secret -s bindDn=uid=admin,ou=system -s connectionUrl=ldap://localhost:10389 -s useTruststoreSpi=always",
"kcadm.sh create components -r demorealm -s name=hardcoded-ldap-role-mapper -s providerId=hardcoded-ldap-role-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.role=[\"realm-management.create-client\"]'",
"kcadm.sh create components -r demorealm -s name=msad-user-account-control-mapper -s providerId=msad-user-account-control-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea",
"kcadm.sh create components -r demorealm -s name=user-attribute-ldap-mapper -s providerId=user-attribute-ldap-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.\"user.model.attribute\"=[\"email\"]' -s 'config.\"ldap.attribute\"=[\"mail\"]' -s 'config.\"read.only\"=[\"false\"]' -s 'config.\"always.read.value.from.ldap\"=[\"false\"]' -s 'config.\"is.mandatory.in.ldap\"=[\"false\"]'",
"kcadm.sh create components -r demorealm -s name=group-ldap-mapper -s providerId=group-ldap-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.\"groups.dn\"=[]' -s 'config.\"group.name.ldap.attribute\"=[\"cn\"]' -s 'config.\"group.object.classes\"=[\"groupOfNames\"]' -s 'config.\"preserve.group.inheritance\"=[\"true\"]' -s 'config.\"membership.ldap.attribute\"=[\"member\"]' -s 'config.\"membership.attribute.type\"=[\"DN\"]' -s 'config.\"groups.ldap.filter\"=[]' -s 'config.mode=[\"LDAP_ONLY\"]' -s 'config.\"user.roles.retrieve.strategy\"=[\"LOAD_GROUPS_BY_MEMBER_ATTRIBUTE\"]' -s 'config.\"mapped.group.attributes\"=[\"admins-group\"]' -s 'config.\"drop.non.existing.groups.during.sync\"=[\"false\"]' -s 'config.roles=[\"admins\"]' -s 'config.groups=[\"admins-group\"]' -s 'config.group=[]' -s 'config.preserve=[\"true\"]' -s 'config.membership=[\"member\"]'",
"kcadm.sh create components -r demorealm -s name=full-name-ldap-mapper -s providerId=full-name-ldap-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.\"ldap.full.name.attribute\"=[\"cn\"]' -s 'config.\"read.only\"=[\"false\"]' -s 'config.\"write.only\"=[\"true\"]'",
"kcadm.sh update realms/demorealm -s 'passwordPolicy=\"hashIterations and specialChars and upperCase and digits and notUsername and length\"'",
"kcadm.sh update realms/demorealm -s 'passwordPolicy=\"hashIterations(25000) and specialChars(2) and upperCase(2) and lowerCase(2) and digits(2) and length(9) and notUsername and passwordHistory(4)\"'",
"kcadm.sh get realms/demorealm --fields passwordPolicy",
"kcadm.sh get authentication/flows -r demorealm",
"kcadm.sh get authentication/flows/febfd772-e1a1-42fb-b8ae-00c0566fafb8 -r demorealm",
"kcadm.sh get authentication/flows/Copy%20of%20browser/executions -r demorealm",
"kcadm.sh create \"authentication/executions/a3147129-c402-4760-86d9-3f2345e401c7/config\" -r demorealm -b '{\"config\":{\"x509-cert-auth.mapping-source-selection\":\"Match SubjectDN using regular expression\",\"x509-cert-auth.regular-expression\":\"(.*?)(?:USD)\",\"x509-cert-auth.mapper-selection\":\"Custom Attribute Mapper\",\"x509-cert-auth.mapper-selection.user-attribute-name\":\"usercertificate\",\"x509-cert-auth.crl-checking-enabled\":\"\",\"x509-cert-auth.crldp-checking-enabled\":false,\"x509-cert-auth.crl-relative-path\":\"crl.pem\",\"x509-cert-auth.ocsp-checking-enabled\":\"\",\"x509-cert-auth.ocsp-responder-uri\":\"\",\"x509-cert-auth.keyusage\":\"\",\"x509-cert-auth.extendedkeyusage\":\"\",\"x509-cert-auth.confirmation-page-disallowed\":\"\"},\"alias\":\"my_otp_config\"}'",
"kcadm get \"authentication/config/dd91611a-d25c-421a-87e2-227c18421833\" -r demorealm",
"kcadm update \"authentication/config/dd91611a-d25c-421a-87e2-227c18421833\" -r demorealm -b '{\"id\":\"dd91611a-d25c-421a-87e2-227c18421833\",\"alias\":\"my_otp_config\",\"config\":{\"x509-cert-auth.extendedkeyusage\":\"\",\"x509-cert-auth.mapper-selection.user-attribute-name\":\"usercertificate\",\"x509-cert-auth.ocsp-responder-uri\":\"\",\"x509-cert-auth.regular-expression\":\"(.*?)(?:USD)\",\"x509-cert-auth.crl-checking-enabled\":\"true\",\"x509-cert-auth.confirmation-page-disallowed\":\"\",\"x509-cert-auth.keyusage\":\"\",\"x509-cert-auth.mapper-selection\":\"Custom Attribute Mapper\",\"x509-cert-auth.crl-relative-path\":\"crl.pem\",\"x509-cert-auth.crldp-checking-enabled\":\"false\",\"x509-cert-auth.mapping-source-selection\":\"Match SubjectDN using regular expression\",\"x509-cert-auth.ocsp-checking-enabled\":\"\"}}'",
"kcadm delete \"authentication/config/dd91611a-d25c-421a-87e2-227c18421833\" -r demorealm"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/admin_cli |
Nodes | Nodes Red Hat OpenShift Service on AWS 4 Red Hat OpenShift Service on AWS Nodes Red Hat OpenShift Documentation Team | [
"kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10",
"oc project <project_name> 1",
"oc create serviceaccount thanos 1",
"apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token",
"oc create -f <file_name>.yaml",
"oc describe serviceaccount thanos 1",
"Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.keda",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #",
"oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #",
"oc create -f daemonset.yaml",
"oc get pods",
"hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m",
"oc describe pod/hello-daemonset-cx6md|grep Node",
"Node: openshift-node01.hostname.com/10.14.20.134",
"oc describe pod/hello-daemonset-e3md9|grep Node",
"Node: openshift-node02.hostname.com/10.14.20.137",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc create -f <file-name>.yaml",
"oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 concurrencyPolicy: \"Replace\" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 9",
"oc create -f <file-name>.yaml",
"oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com Ready worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.31.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.31.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.31.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.31.3",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.31.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.31.3 Kube-Proxy Version: v1.31.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 80m (0%) 0 (0%) 
1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #",
"oc get pod --selector=<nodeSelector>",
"oc get pod --selector=kubernetes.io/os",
"oc get pod -l=<nodeSelector>",
"oc get pod -l kubernetes.io/os=linux",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.31.3",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"USD(nproc) X 1/2 MiB",
"for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1",
"curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'",
"apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f myapp.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f myservice.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377",
"oc create -f mydb.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m",
"oc set volume <object_selection> <operation> <mandatory_parameters> <options>",
"oc set volume <object_type>/<name> [options]",
"oc set volume pod/p1",
"oc set volume dc --all --name=v1",
"oc set volume <object_type>/<name> --add [options]",
"oc set volume dc/registry --add",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP",
"oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'",
"oc set volume <object_type>/<name> --add --overwrite [options]",
"oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data",
"oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt",
"oc set volume <object_type>/<name> --remove [options]",
"oc set volume dc/d1 --remove --name=v1",
"oc set volume dc/d1 --remove --name=v1 --containers=c1",
"oc set volume rc/r1 --remove --confirm",
"oc rsh <pod>",
"sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3",
"apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data",
"echo -n \"admin\" | base64",
"YWRtaW4=",
"echo -n \"1f2d1e2e67df\" | base64",
"MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=",
"oc create -f <secrets-filename>",
"oc create -f secret.yaml",
"secret \"mysecret\" created",
"oc get secret <secret-name>",
"oc get secret mysecret",
"NAME TYPE DATA AGE mysecret Opaque 2 17h",
"oc get secret <secret-name> -o yaml",
"oc get secret mysecret -o yaml",
"apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque",
"kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1",
"oc create -f <your_yaml_file>.yaml",
"oc create -f secret-pod.yaml",
"pod \"test-projected-volume\" created",
"oc get pod <name>",
"oc get pod test-projected-volume",
"NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s",
"oc exec -it <pod> <command>",
"oc exec -it test-projected-volume -- /bin/sh",
"/ # ls",
"bin home root tmp dev proc run usr etc projected-volume sys var",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never",
"oc create -f volume-pod.yaml",
"oc logs -p dapi-volume-test-pod",
"cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory",
"oc create -f pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory",
"oc create -f volume-pod.yaml",
"apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth",
"oc create -f secret.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue",
"oc create -f configmap.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"oc rsync <source> <destination> [-c <container>]",
"<pod name>:<dir>",
"oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>",
"oc rsync /home/user/source devpod1234:/src -c user-container",
"oc rsync devpod1234:/src /home/user/source",
"oc rsync devpod1234:/src/status.txt /home/user/",
"rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"export RSYNC_RSH='oc rsh'",
"rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]",
"oc exec mypod date",
"Thu Apr 9 02:21:53 UTC 2015",
"/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>",
"/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> 5000 6000",
"Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000",
"oc port-forward <pod> 8888:5000",
"Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000",
"oc port-forward <pod> :5000",
"Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000",
"oc port-forward <pod> 0:5000",
"/proxy/nodes/<node_name>/portForward/<namespace>/<pod>",
"/proxy/nodes/node123.openshift.com/portForward/myns/mypod",
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"ovn-kubernetes\": cannot set \"ovn-kubernetes\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"rosa edit namespace/<project_name>",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/nodes/index |
1.2. About glusterFS | 1.2. About glusterFS glusterFS aggregates various storage servers over network interconnects into one large parallel network file system. Based on a stackable user space design, it delivers exceptional performance for diverse workloads and is a key building block of Red Hat Gluster Storage. The POSIX compatible glusterFS servers, which use XFS file system format to store data on disks, can be accessed using industry-standard access protocols including Network File System (NFS) and Server Message Block (SMB) (also known as CIFS). | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/about_glusterfs |
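As a quick illustration of the access paths mentioned above, the following is a minimal sketch of mounting a Gluster volume from a client. The host name server1.example.com and the volume name gvol0 are assumptions for the example, not values from this guide, and the native (glusterfs-fuse) client packages are assumed to be installed.

```bash
# Native (FUSE) client mount -- host and volume names are assumed examples.
mkdir -p /mnt/gvol0
mount -t glusterfs server1.example.com:/gvol0 /mnt/gvol0

# The same volume over NFSv3, provided NFS access is enabled for the volume.
mkdir -p /mnt/gvol0-nfs
mount -t nfs -o vers=3 server1.example.com:/gvol0 /mnt/gvol0-nfs
```

Any server in the trusted storage pool can be named as the mount source; the native client fetches the volume layout from it and then communicates with all bricks directly.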
Part II. User Management | Part II. User Management | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/part-user_management |
Chapter 1. Requirements for scaling storage nodes | Chapter 1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Storage device requirements Dynamic storage devices Local storage devices Capacity planning Important Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space completely. Full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support . 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure Red Hat Virtualization VMware | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/scaling_storage/requirements-for-scaling-storage-nodes |
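Before scaling, it helps to check how close the cluster is to the near-full (75%) and full (85%) thresholds noted above. The following is a minimal sketch, assuming the rook-ceph toolbox pod has been enabled in the openshift-storage namespace; the namespace and label selector are the usual defaults, not values taken from this chapter.

```bash
# Locate the toolbox pod (assumes the toolbox has been enabled).
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name | head -n 1)

# Raw and per-pool utilization, then per-OSD utilization.
oc rsh -n openshift-storage "$TOOLS_POD" ceph df
oc rsh -n openshift-storage "$TOOLS_POD" ceph osd df
```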
Chapter 8. application | Chapter 8. application This chapter describes the commands under the application command. 8.1. application credential create Create new application credential Usage: Table 8.1. Positional arguments Value Summary <name> Name of the application credential Table 8.2. Command arguments Value Summary -h, --help Show this help message and exit --secret <secret> Secret to use for authentication (if not provided, one will be generated) --role <role> Roles to authorize (name or id) (repeat option to set multiple values) --expiration <expiration> Sets an expiration date for the application credential, format of YYYY-mm-ddTHH:MM:SS (if not provided, the application credential will not expire) --description <description> Application credential description --unrestricted Enable application credential to create and delete other application credentials and trusts (this is potentially dangerous behavior and is disabled by default) --restricted Prohibit application credential from creating and deleting other application credentials and trusts (this is the default behavior) --access-rules <access-rules> Either a string or file path containing a json- formatted list of access rules, each containing a request method, path, and service, for example [{"method": "GET", "path": "/v2.1/servers", "service": "compute"}] Table 8.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 8.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 8.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 8.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 8.2. application credential delete Delete application credentials(s) Usage: Table 8.7. Positional arguments Value Summary <application-credential> Application credentials(s) to delete (name or id) Table 8.8. Command arguments Value Summary -h, --help Show this help message and exit 8.3. application credential list List application credentials Usage: Table 8.9. Command arguments Value Summary -h, --help Show this help message and exit --user <user> User whose application credentials to list (name or ID) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 8.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 8.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 8.12. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 8.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 8.4. application credential show Display application credential details Usage: Table 8.14. Positional arguments Value Summary <application-credential> Application credential to display (name or id) Table 8.15. Command arguments Value Summary -h, --help Show this help message and exit Table 8.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 8.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 8.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 8.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack application credential create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--secret <secret>] [--role <role>] [--expiration <expiration>] [--description <description>] [--unrestricted] [--restricted] [--access-rules <access-rules>] <name>",
"openstack application credential delete [-h] <application-credential> [<application-credential> ...]",
"openstack application credential list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]",
"openstack application credential show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <application-credential>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/application |
Chapter 25. Configuring an Installed Linux on System z Instance | Chapter 25. Configuring an Installed Linux on System z Instance For more information about Linux on System z, see the publications listed in Chapter 27, IBM System z References . Some of the most common tasks are described here. 25.1. Adding DASDs This section explains how to set a Direct Access Storage Device (DASD) online, format it, and how to make sure it is attached to the system persistently, making it automatically available after a reboot. Note Make sure the device is attached or linked to the Linux system if running under z/VM. To link a mini disk to which you have access, issue, for example: See z/VM: CP Commands and Utilities Reference, SC24-6175 for details about these commands. 25.1.1. Dynamically Setting DASDs Online The following procedure describes bringing a DASD online dynamically (not persistently). This is the first step when configuring a new DASD; later procedures will explain how to make it available persistently. Procedure 25.1. Adding DASD Disks on IBM System z Using the VMCP Driver Enable the VMCP driver: Use the cio_ignore command to remove the DASD from the list of ignored devices and make it visible to Linux: Replace DeviceNumber with the device number of the DASD. For example: Link the disk to the virtual machine: Replace DeviceNumber with the device number of the DASD. Set the device online. Use a command of the following form: Replace DeviceNumber with the device number of the DASD. Verify that the disk is ready using the lsdasd command: In the above example, device 0102 (shown as 0.0.0102 in the Bus-ID column) is being accessed as /dev/dasdf . If you followed the above procedure, the new DASD is attached for the current session only. This means that the DASD will not still be attached after you reboot the system. See Section 25.1.2, "Persistently setting DASDs online" for information about attaching the storage device permanently. You can also find more information in the DASD Chapter in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 . 25.1.2. Persistently setting DASDs online The instructions in Section 25.1.1, "Dynamically Setting DASDs Online" described how to activate DASDs dynamically in a running system. Such changes are not persistent; the DASDs will not be attached after the system reboots. Procedures described in this section assume that you have already attached the DASD dynamically. Making changes to the DASD configuration persistent in your Linux system depends on whether the DASDs belong to the root ( / ) file system. Those DASDs required for the root file system need to be activated early during the boot process by the initramfs to be able to mount the root file system. The DASDs which are not part of the root file system can be activated later, simplifying the configuration process. The list of ignored devices ( cio_ignore ) is handled transparently for persistent device configurations. You do not need to free devices from the ignore list manually. 25.1.2.1. DASDs Which Are Part of the Root File System If you are attaching a new DASD as part of the root file system, you will have to edit the zipl boot loader's configuration and then regenerate the initramfs so that your changes will take effect after the reboot. The following procedure explains the steps you need to take. Procedure 25.2. 
Permanently Attaching DASDs as Root Devices Edit the /etc/dasd.conf configuration file using a plain text editor such as Vim , and append a line to this file with your DASD's configuration. You can use parts of the file that describe previously configured devices for reference. A valid configuration line will look similar to the following: Edit the /etc/zipl.conf configuration file. An example zipl.conf file will look similar to the following: Note the multiple rd_DASD= options on the parameters= line. You must add the new DASD to this line, using the same syntax - the rd_DASD= keyword, followed by the device ID and a comma-separated list of options. See the dasd= parameter description in the DASD device driver chapter in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 for details. The step is to rebuild the initrd : Then, rebuild the boot loader configuration using the zipl command. You can use the -V option for more detailed output: After completing this procedure, the new DASD is persistently attached and can be used as part of the root file system. However, the root file system still needs to be expanded to the new DASD. If your system uses an LVM logical volume for the root file system, you will also need to expand this volume (and the volume group which contains it) to the new DASD. This can be done using the built-in pvcreate , vgextend and lvextend commands to create a physical volume for LVM, expand the existing volume group and expand the root logical volume, respectively. See Section 25.1.5, "Expanding Existing LVM Volumes to New Storage Devices" for details. 25.1.3. DASDs Which Are Not Part of the Root File System DASDs that are not part of the root file system, that is, data disks , are persistently configured in the file /etc/dasd.conf . It contains one DASD per line. Each line begins with the device bus ID of a DASD. Optionally, each line can continue with options separated by space or tab characters. Options consist of key-value-pairs, where the key and value are separated by an equals sign. The key corresponds to any valid sysfs attribute a DASD may have. The value will be written to the key's sysfs attribute. Entries in /etc/dasd.conf are activated and configured by udev when a DASD is added to the system. At boot time, all DASDs visible to the system get added and trigger udev . Example content of /etc/dasd.conf : Modifications of /etc/dasd.conf only become effective after a reboot of the system or after the dynamic addition of a new DASD by changing the system's I/O configuration (that is, the DASD is attached under z/VM). Alternatively, you can trigger the activation of a new entry in /etc/dasd.conf for a DASD which was previously not active, by executing the following commands: Procedure 25.3. Permanently Attaching DASDs as Non-root Devices Trigger the activation by writing to the uevent attribute of the device: For example: 25.1.4. Preparing a New DASD with Low-level Formatting The step after bringing the DASD online is to format it, if you need to do so. The following procedure explains the necessary steps. Warning This procedure will wipe all existing data on the disk. Make sure to back up any data you want to keep before proceeding. Procedure 25.4. Formatting a DASD Wipe all existing data on the DASD using the dasdfmt command. Replace DeviceNumber with the device number of the DASD. When prompted for confirmation (as shown in the example below), type yes to proceed. 
When the progress bar reaches the end and the format is complete, dasdfmt prints the following output: See the dasdfmt(8) man page for information about the syntax of the dasdfmt command. Use the fdasd command to write a new Linux-compatible partition table to the DASD. Replace DeviceNumber with the device number of the DASD. This example uses the -a option to create a single partition spanning the entire disk. Other layouts are possible; up to three partitions can be created on a single DASD. For information about the syntax of the fdasd command and available options, see the fdasd(8) man page. Create a new partition with fdisk . Replace DeviceName with the device name of the DASD. After you execute fdisk , a series of prompts will appear in your terminal. These prompts can be used to manipulate the disk partition table, creating new partitions or editing existing one. For information about using fdisk , see the fdisk(8) man page. After a (low-level formatted) DASD is online, it can be used like any other disk under Linux. For instance, you can create file systems, LVM physical volumes, or swap space on its partitions, for example /dev/disk/by-path/ccw-0.0.4b2e-part1 . Never use the full DASD device ( dev/dasdb ) for anything but the commands dasdfmt and fdasd . If you want to use the entire DASD, create one partition spanning the entire drive as in the fdasd example above. Note To add additional disks later without breaking existing disk entries in, for example, /etc/fstab , use the persistent device symbolic links under /dev/disk/by-path/ . 25.1.5. Expanding Existing LVM Volumes to New Storage Devices If your system uses LVM, you need to expand an existing volume group and one or more logical volumes so that they contain the new DASD which you attached by following the procedures described earlier in this chapter. Otherwise, the DASD will be attached to the system, but you will not be able to use it. The following procedure explains how to use the entire capacity of the new DASD to expand an existing logical volume. If you want to use the new DASD for multiple logical volumes, you will need to create multiple LVM physical volumes on this partition, and repeat this procedure for each logical volume (and volume group) you want to expand. This procedure assumes you followed the steps in Section 25.1.1, "Dynamically Setting DASDs Online" to attach the new DASD dynamically, then Section 25.1.2.1, "DASDs Which Are Part of the Root File System" to attach it persistently and prepare it to be used for the root volume, and that you formatted it as described in Section 25.1.4, "Preparing a New DASD with Low-level Formatting" and created a single partition on it. Procedure 25.5. Expanding Existing Logical Volume to Use a New DASD Create a new physical volume for LVM on the DASD using the pvcreate command: Important The device name must be specified as a partition - for example, /dev/dasdf1 . Do not specify the entire block device. List existing physical volumes using the pvs command to verify that the physical volume has been created: As you can see in the above example, /dev/dasdf1 now contains an empty physical volume which is not assigned to any volume group. Use the vgextend command to expand an existing volume group containing the volume you want to use the new DASD for: Replace VolumeGroup with the name of the volume group you are expanding, and PhysicalVolume with the name of the physical volume (for example, /dev/dasdf1 ). 
Use the lvextend command to expand a logical volume you want to use the new DASD for: For example: After you complete the procedure, an existing logical volume is expanded and contains the new DASD in addition to any previously assigned storage devices. You can also use the pvs , vgs , and lvs commands as root to view existing LVM physical volumes, volume groups and logical volumes at any point during the procedure. | [
"CP ATTACH EB1C TO *",
"CP LINK RHEL6X 4B2E 4B2E MR DASD 4B2E LINKED R/W",
"modprobe vmcp",
"cio_ignore -r DeviceNumber",
"cio_ignore -r 0102",
"vmcp 'link * DeviceNumber DeviceNumber rw'",
"# chccwdev -e DeviceNumber",
"lsdasd Bus-ID Status Name Device Type BlkSz Size Blocks ============================================================================== 0.0.0100 active dasda 94:0 ECKD 4096 2347MB 600840 0.0.0301 active dasdb 94:4 FBA 512 512MB 1048576 0.0.0300 active dasdc 94:8 FBA 512 256MB 524288 0.0.0101 active dasdd 94:12 ECKD 4096 2347MB 600840 0.0.0200 active dasde 94:16 ECKD 4096 781MB 200160 0.0.0102 active dasdf 94:20 ECKD 4096 2347MB 600840",
"0.0.0102 use_diag=0 readonly=0 erplog=0 failfast=0",
"[defaultboot] default=linux target=/boot/ [linux] image=/boot/vmlinuz-2.6.32-19.el6.s390x ramdisk=/boot/initramfs-2.6.32-19.el6.s390x.img parameters=\"root=/dev/mapper/vg_devel1-lv_root rd_DASD=0.0.0200,use_diag=0,readonly=0,erplog=0,failfast=0 rd_DASD=0.0.0207,use_diag=0,readonly=0,erplog=0,failfast=0 rd_LVM_LV=vg_devel1/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!0.0.0009\"",
"mkinitrd -f /boot/initramfs-2.6.32-71.el6.s390x.img `uname -r`",
"zipl -V Using config file '/etc/zipl.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 3308 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 595416 Building bootmap in '/boot/' Building menu 'rh-automatic-menu' Adding #1: IPL section 'linux' (default) kernel image......: /boot/vmlinuz-2.6.32-19.el6.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root rd_DASD=0.0.0200,use_diag=0,readonly=0,erplog=0,failfast=0 rd_DASD=0.0.0207,use_diag=0,readonly=0,erplog=0,failfast=0 rd_LVM_LV=vg_devel1/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!0.0.0009' initial ramdisk...: /boot/initramfs-2.6.32-19.el6.s390x.img component address: kernel image....: 0x00010000-0x00a70fff parmline........: 0x00001000-0x00001fff initial ramdisk.: 0x02000000-0x022d2fff internal loader.: 0x0000a000-0x0000afff Preparing boot device: dasda (0201). Preparing boot menu Interactive prompt......: enabled Menu timeout............: 15 seconds Default configuration...: 'linux' Syncing disks Done.",
"0.0.0207 0.0.0200 use_diag=1 readonly=1",
"echo add > /sys/bus/ccw/devices/ device.bus,ID /uevent",
"echo add > /sys/bus/ccw/devices/0.0.021a/uevent",
"dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0. DeviceNumber Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.0102 in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X0102 Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%",
"Rereading the partition table Exiting",
"fdasd -a /dev/disk/by-path/ccw- DeviceNumber auto-creating one partition for the whole disk writing volume label writing VTOC checking ! wrote NATIVE! rereading partition table",
"fdisk /dev/ DeviceName",
"pvcreate /dev/ DeviceName",
"pvs PV VG Fmt Attr PSize PFree /dev/dasda2 vg_local lvm2 a-- 1,29g 0 /dev/dasdd1 vg_local lvm2 a-- 2,29g 0 /dev/dasdf1 lvm2 a-- 2,29g 2,29g /dev/mapper/mpathb vgextnotshared lvm2 a-- 200,00g 1020,00m",
"vgextend VolumeGroup PhysicalVolume",
"lvextend -L + Size /dev/mapper/ VolumeGroup - LogicalVolume",
"lvextend -L +2G /dev/mapper/vg_local-lv_root Extending logical volume lv_root to 2,58 GiB Logical volume lv_root successfully resized"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ap-s390info |
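To round off the DASD chapter, here is a minimal sketch of putting a file system on the newly partitioned DASD and mounting it persistently through the by-path symbolic link, as the chapter recommends. The bus ID 0.0.4b2e matches the example path used above; the ext4 file system and the /data/dasd mount point are assumptions for the example.

```bash
# Create a file system on the partition (never on the whole DASD device).
mkfs.ext4 /dev/disk/by-path/ccw-0.0.4b2e-part1

# Mount it now and across reboots, using the persistent by-path link.
mkdir -p /data/dasd
echo '/dev/disk/by-path/ccw-0.0.4b2e-part1 /data/dasd ext4 defaults 0 0' >> /etc/fstab
mount /data/dasd
```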
Chapter 5. Managing namespace buckets | Chapter 5. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 5.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enables you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list bucket which is using this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PubObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 5.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. 
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 5.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. 
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. 
This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 5.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage Object Storage Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. 
Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that the your namespace bucket is present in the list and is in Bound state. 5.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 5.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 5.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma separated list of bucket names to which the user is allowed to have access and management rights. default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account should be allowed full permission or not. Supported values are true or false . Default value is false . new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . 
Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 5.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. 
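If you prefer to read the two values directly from the legacy PV instead of copying them out of the YAML output, a small sketch such as the following can be used; the PV name is the example one shown above and must be adjusted for your environment:
PV=pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
oc get pv "$PV" -o jsonpath='{.spec.csi.volumeAttributes.subvolumePath}{"\n"}'
oc get pv "$PV" -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'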
Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 5.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 5.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 5.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 5.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name>` Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. 
This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example" The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. | [
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa account create <noobaa-account-name> [flags]",
"noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore",
"NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>",
"noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s",
"oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001",
"oc get ns <application_namespace> -o yaml | grep scc",
"oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000",
"oc project <application_namespace>",
"oc project testnamespace",
"oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s",
"oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s",
"oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}",
"oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]",
"oc exec -it <pod_name> -- df <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"oc get pv | grep <pv_name>",
"oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s",
"oc get pv <pv_name> -o yaml",
"oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound",
"cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF",
"oc create -f <YAML_file>",
"oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created",
"oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s",
"oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".",
"noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'",
"noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'",
"oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace",
"noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'",
"noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'",
"oc exec -it <pod_name> -- mkdir <mount_path> /nsfs",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs",
"noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'",
"noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'",
"oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"noobaa bucket delete <bucket_name>",
"noobaa bucket delete legacy-bucket",
"noobaa account delete <user_account>",
"noobaa account delete leguser",
"noobaa namespacestore delete <nsfs_namespacestore>",
"noobaa namespacestore delete legacy-namespace",
"oc delete pv <cephfs_pv_name>",
"oc delete pvc <cephfs_pvc_name>",
"oc delete pv cephfs-pv-legacy-openshift-storage",
"oc delete pvc cephfs-pvc-legacy",
"oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"oc edit ns <appplication_namespace>",
"oc edit ns testnamespace",
"oc get ns <application_namespace> -o yaml | grep sa.scc.mcs",
"oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF",
"oc create -f scc.yaml",
"oc create serviceaccount <service_account_name>",
"oc create serviceaccount testnamespacesa",
"oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>",
"oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa",
"oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'",
"oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'",
"oc edit dc <pod_name> -n <application_namespace>",
"spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>",
"oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace",
"spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0",
"oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext",
"oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_hybrid_and_multicloud_resources/Managing-namespace-buckets_rhodf |
Chapter 4. Identity and access management | Chapter 4. Identity and access management The Identity service (keystone) provides authentication and authorization for cloud users in a Red Hat OpenStack Platform environment. You can use the Identity service for direct end-user authentication, or configure it to use external authentication methods to meet your security requirements or to match your current authentication infrastructure. 4.1. Red Hat OpenStack Platform fernet tokens After you authenticate, the Identity service (keystone): Issues an encrypted bearer token known as a fernet token. This token represents your identity. Authorizes you to perform operations based on your role. Each fernet token remains valid for up to an hour, by default. This allows a user to perform a series of tasks without needing to reauthenticate. Fernet is the default token provider that replaces the UUID token provider. Additional resources Using Fernet keys for encryption in the overcloud 4.2. OpenStack Identity service entities The Red Hat OpenStack Identity service (keystone) recognizes the following entities: Users OpenStack Identity service (keystone) users are the atomic unit of authentication. A user must be assigned a role on a project in order to authenticate. Groups OpenStack Identity service groups are a logical grouping of users. A group can be provided access to projects under specific roles. Managing groups instead of users can simplify the management of roles. Roles OpenStack Identity service roles define the OpenStack APIs that are accessible to users or groups who are assigned those roles. Projects OpenStack Identity service projects are isolated groups of users who have common access to a shared quota of physical resources and the virtual infrastructure built from those physical resources. Domains OpenStack Identity service domains are high-level security boundaries for projects, users, and groups. You can use OpenStack Identity domains to centrally manage all keystone-based identity components. Red Hat OpenStack Platform supports multiple domains. You can represent users of different domains by using separate authentication backends. 4.3. Authenticating with keystone You can adjust the authentication security requirements enforced by OpenStack Identity service (keystone). Table 4.1. Identity service authentication parameters Parameter Description KeystoneChangePasswordUponFirstUse Enabling this option requires users to change their password when the user is created, or upon administrative reset. KeystoneDisableUserAccountDaysInactive The maximum number of days a user can go without authenticating before being considered "inactive" and automatically disabled (locked). KeystoneLockoutDuration The number of seconds a user account is locked when the maximum number of failed authentication attempts (as specified by KeystoneLockoutFailureAttempts ) is exceeded. KeystoneLockoutFailureAttempts The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by KeystoneLockoutDuration . KeystoneMinimumPasswordAge The number of days that a password must be used before the user can change it. This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. KeystonePasswordExpiresDays The number of days for which a password is considered valid before requiring users to change it.
KeystoneUniqueLastPasswordCount This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. Additional resources Identity (keystone) parameters. 4.4. Using Identity service heat parameters to stop invalid login attempts Repetitive failed login attempts can be a sign of an attempted brute-force attack. You can use the Identity Service to limit access to accounts after repeated unsuccessful login attempts. Prerequisites You have an installed Red Hat OpenStack Platform director environment. You are logged into the director as stack. Procedure To configure the maximum number of times that a user can fail to authenticate before the user account is locked, set the value of the KeystoneLockoutFailureAttempts and KeystoneLockoutDuration heat parameters in an environment file. In the following example, the KeystoneLockoutDuration is set to one hour: Include the environment file in your deploy script. When you run your deploy script on a previously deployed environment, it is updated with the additional parameters: 4.5. Authenticating with external identity providers You can use an external identity provider (IdP) to authenticate to OpenStack service providers (SP). SPs are the services provided by an OpenStack cloud. When you use a separate IdP, external authentication credentials are separate from the databases used by other OpenStack services. This separation reduces the risk of a compromise of stored credentials. Each external IdP has a one-to-one mapping to an OpenStack Identity service (keystone) domain. You can have multiple coexisting domains with Red Hat OpenStack Platform. External authentication provides a way to use existing credentials to access resources in Red Hat OpenStack Platform without creating additional identities. The credential is maintained by the user's IdP. You can use IdPs such as Red Hat Identity Management (IdM), and Microsoft Active Directory Domain Services (AD DS) for identity management. In this configuration, the OpenStack Identity service has read-only access to the LDAP user database. The management of API access based on user or group role is performed by keystone. Roles are assigned to the LDAP accounts by using the OpenStack Identity service. 4.5.1. How LDAP integration works In the diagram below, keystone uses an encrypted LDAPS connection to connect to an Active Directory Domain Controller. When a user logs in to horizon, keystone receives the supplied user credentials and passes them to Active Directory. Additional resources Integrating OpenStack Identity (keystone) with Active Directory Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM) Configuring director to use domain specific LDAP backends | [
"parameter_defaults KeystoneLockoutDuration: 3600 KeystoneLockoutFailureAttempts: 3",
"openstack overcloud deploy --templates -e keystone_config.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/security_and_hardening_guide/identity_and_access_management |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/pr01 |
Chapter 6. Control plane backup and restore | Chapter 6. Control plane backup and restore 6.1. Backing up etcd etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. Back up your cluster's etcd data regularly and store in a secure location ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation, otherwise the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost. Be sure to take an etcd backup before you update your cluster. Taking a backup before you update is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.17.5 cluster must use an etcd backup that was taken from 4.17.5. Important Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host. After you have an etcd backup, you can restore to a cluster state . 6.1.1. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Procedure Start a debug session as root for a control plane node: USD oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables by running the following commands: USD export HTTP_PROXY=http://<your_proxy.example.com>:8080 USD export HTTPS_PROXY=https://<your_proxy.example.com>:8080 USD export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command. 
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 6.1.2. Additional resources Recovering an unhealthy etcd cluster 6.1.3. Creating automated etcd backups The automated backup feature for etcd supports both recurring and single backups. Recurring backups create a cron job that starts a single backup each time the job triggers. Important Automating etcd backups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Follow these steps to enable automated backups for etcd. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster prevents minor version updates. The TechPreviewNoUpgrade feature set cannot be disabled. 
Do not enable this feature set on production clusters. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure Create a FeatureGate custom resource (CR) file named enable-tech-preview-no-upgrade.yaml with the following contents: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade Apply the CR and enable automated backups: USD oc apply -f enable-tech-preview-no-upgrade.yaml It takes time to enable the related APIs. Verify the creation of the custom resource definition (CRD) by running the following command: USD oc get crd | grep backup Example output backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z 6.1.3.1. Creating a single etcd backup Follow these steps to create a single etcd backup by creating and applying a custom resource (CR). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the PVC to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup: Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the node to attach this PV to. 
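To find the node name to use in the nodeAffinity values list, you can list the control plane nodes; this is only a convenience check, and it assumes the node name matches its kubernetes.io/hostname label, which is the usual case:
oc get nodes -l node-role.kubernetes.io/master
# Use the NAME value of the chosen control plane node in place of <example_master_node>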
Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml 6.1.3.2. Creating recurring etcd backups Follow these steps to create automated recurring backups of etcd. Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage 1 The amount of storage available to the PVC. Adjust this value for your requirements. Note Each of the following providers require changes to the accessModes and storageClassName keys: Provider accessModes value storageClassName value AWS with the versioned-installer-efc_operator-ci profile - ReadWriteMany efs-sc Google Cloud Platform - ReadWriteMany filestore-csi Microsoft Azure - ReadWriteMany azurefile-csi Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps: Warning If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data. 
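One way to reduce that risk is to periodically check and copy completed backups off the node that holds the local volume. The following is only an illustrative check, and it assumes the /mnt path used in the PersistentVolume definition below:
oc debug --as-root node/<node_name>
sh-4.4# chroot /host
sh-4.4# ls -l /mnt
From there, the backup files can be copied to a location outside the cluster with whatever host tooling your environment allows.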
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml from the applied StorageClass with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the master node to attach this PV to. Tip Run the following command to list the available nodes: USD oc get nodes Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a custom resource definition (CRD) file named etcd-recurring-backups.yaml . The contents of the created CRD define the schedule and retention type of automated backups. For the default retention type of RetentionNumber with 15 retained backups, use contents such as the following example: apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: "20 4 * * *" 1 timeZone: "UTC" pvcName: etcd-backup-pvc 1 The CronTab schedule for recurring backups. Adjust this value for your needs. To use retention based on the maximum number of backups, add the following key-value pairs to the etcd key: spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2 1 The retention type. Defaults to RetentionNumber if unspecified. 2 The maximum number of backups to retain. Adjust this value for your needs. Defaults to 15 backups if unspecified. Warning A known issue causes the number of retained backups to be one greater than the configured value. For retention based on the file size of backups, use the following: spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1 1 The maximum file size of the retained backups in gigabytes. Adjust this value for your needs. Defaults to 10 GB if unspecified. Warning A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value. Create the cron job defined by the CRD by running the following command: USD oc create -f etcd-recurring-backup.yaml To find the created cron job, run the following command: USD oc get cronjob -n openshift-etcd 6.2. 
Replacing an unhealthy etcd member This document describes the process to replace a single unhealthy etcd member. This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping. Note If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a cluster state instead of this procedure. If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure. If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member. 6.2.1. Prerequisites Take an etcd backup prior to replacing an unhealthy etcd member. 6.2.2. Identifying an unhealthy etcd member You can identify if your cluster has an unhealthy etcd member. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Check the status of the EtcdMembersAvailable status condition using the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}' Review the output: 2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy. 6.2.3. Determining the state of the unhealthy etcd member The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in: The machine is not running or the node is not ready The etcd pod is crashlooping This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member. Note If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have identified an unhealthy etcd member. Procedure Determine if the machine is not running : USD oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running Example output ip-10-0-131-183.ec2.internal stopped 1 1 This output lists the node and the status of the node's machine. If the status is anything other than running , then the machine is not running . If the machine is not running , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the node is not ready . If either of the following scenarios are true, then the node is not ready . If the machine is running, then check whether the node is unreachable: USD oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable Example output ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1 1 If the node is listed with an unreachable taint, then the node is not ready . 
If the node is still reachable, then check whether the node is listed as NotReady : USD oc get nodes -l node-role.kubernetes.io/master | grep "NotReady" Example output ip-10-0-131-183.ec2.internal NotReady master 122m v1.31.3 1 1 If the node is listed as NotReady , then the node is not ready . If the node is not ready , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the etcd pod is crashlooping . If the machine is running and the node is ready, then check whether the etcd pod is crashlooping. Verify that all control plane nodes are listed as Ready : USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.31.3 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.31.3 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.31.3 Check whether the status of an etcd pod is either Error or CrashloopBackoff : USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m 1 Since this status of this pod is Error , then the etcd pod is crashlooping . If the etcd pod is crashlooping , then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure. 6.2.4. Replacing the unhealthy etcd member Depending on the state of your unhealthy etcd member, use one of the following procedures: Replacing an unhealthy etcd member whose machine is not running or whose node is not ready Installing a primary control plane node on an unhealthy cluster Replacing an unhealthy etcd member whose etcd pod is crashlooping Replacing an unhealthy stopped baremetal etcd member 6.2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready. Note If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. Prerequisites You have identified the unhealthy etcd member. You have verified that either the machine is not running or the node is not ready. Important You must wait if you power off other control plane nodes. The control plane nodes must remain powered off until the replacement of an unhealthy etcd member is complete. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure, so that you can restore your cluster if you experience any issues. Procedure Remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. The USD etcdctl endpoint health command will list the removed member until the procedure of replacement is finished and a new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 6fc1e7c9db35841d Example output Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Important After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change. Note etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure. 
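Before you move on to deleting the node, you can optionally confirm that the override from the previous step was recorded on the etcd cluster object. This is a minimal sketch; empty output means the patch did not apply and the quorum guard is still active.
# Prints the override map set by the patch above; empty output means the
# quorum guard is still enabled.
oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}{"\n"}'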
Delete the affected node by running the following command: USD oc delete node <node_name> Example command USD oc delete node ip-10-0-131-183.ec2.internal Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master by using the same method that was used to originally create it. Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal . Delete the machine of the unhealthy member: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the unhealthy node. A new machine is automatically provisioned after deleting the machine of the unhealthy member. 
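While the replacement machine provisions, you can optionally poll its phase instead of rerunning oc get machines by hand. This sketch assumes the example machine name shown in the next output; substitute the name that your cluster reports.
# Poll until the newly provisioned control plane machine reports Running.
# The machine name is the example value from the output below.
until oc get machine clustername-8qw5l-master-3 -n openshift-machine-api \
    -o jsonpath='{.status.phase}' | grep -q Running; do
  sleep 30
done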
Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready once the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Note Verify the subnet IDs that you are using for your machine sets to ensure that they end up in the correct availability zone. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might experience the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. Verify that there are exactly three etcd members. 
Connect to the running etcd container, passing in the name of a pod that was not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. Additional resources Recovering a degraded etcd Operator Installing a primary control plane node on an unhealthy cluster 6.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping. Prerequisites You have identified the unhealthy etcd member. You have verified that the etcd pod is crashlooping. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Stop the crashlooping etcd pod. Debug the node that is crashlooping. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc debug node/ip-10-0-131-183.ec2.internal 1 1 Replace this with the name of the unhealthy node. Change your root directory to /host : sh-4.2# chroot /host Move the existing etcd pod file out of the kubelet manifest directory: sh-4.2# mkdir /var/lib/etcd-backup sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/ Move the etcd data directory to a different location: sh-4.2# mv /var/lib/etcd/ /tmp You can now exit the node shell. Remove the unhealthy member. Choose a pod that is not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m Connect to the running etcd container, passing in the name of a pod that is not on the affected node. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 62bcf33650a7170a Example output Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Force etcd redeployment. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that the new member is available and healthy. Connect to the running etcd container again. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal Verify that all members are healthy: sh-4.2# etcdctl endpoint health Example output https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms 6.2.4.3. Replacing an unhealthy bare metal etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready. If you are running installer-provisioned infrastructure or you used the Machine API to create your machines, follow these steps. Otherwise you must create the new control plane node using the same method that was used to originally create it. Prerequisites You have identified the unhealthy bare metal etcd member. You have verified that either the machine is not running or the node is not ready. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Verify and remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none> Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is completed and the new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. sh-4.2# etcdctl member remove 7a8197040a5126c8 Example output Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ You can now exit the node shell. Important After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot. 
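If you script the member removal shown above instead of copying the member ID by hand, you can derive the ID from the member name inside the etcdctl container. The following is a sketch only; the awk field positions assume the default comma-separated output of etcdctl member list rather than the -w table format.
# Resolve the unhealthy member's ID by name, then remove it.
# Review the printed ID before running the remove command.
UNHEALTHY=openshift-control-plane-2
MEMBER_ID=$(etcdctl member list | awk -F', ' -v n="$UNHEALTHY" '$3==n {print $1}')
echo "removing member: $MEMBER_ID"
etcdctl member remove "$MEMBER_ID"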
Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed by running the following commands. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep openshift-control-plane-2 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret "etcd-peer-openshift-control-plane-2" deleted Delete the serving secret: USD oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-metrics-openshift-control-plane-2" deleted Delete the metrics secret: USD oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-openshift-control-plane-2" deleted Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 This is the control plane machine for the unhealthy node, examplecluster-control-plane-2 . Ensure that the Bare Metal Operator is available by running the following command: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.18.0 True False False 3d15h Remove the old BareMetalHost object by running the following command: USD oc delete bmh openshift-control-plane-2 -n openshift-machine-api Example output baremetalhost.metal3.io "openshift-control-plane-2" deleted Delete the machine of the unhealthy member by running the following command: USD oc delete machine -n openshift-machine-api examplecluster-control-plane-2 After you remove the BareMetalHost and Machine objects, then the Machine controller automatically deletes the Node object. 
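You can optionally confirm that the Machine object and its Node are gone before you continue. If either object is still listed after several minutes, the deletion might be stuck, which the next step addresses.
# Both commands should report NotFound once deletion has completed.
oc get machine examplecluster-control-plane-2 -n openshift-machine-api
oc get node openshift-control-plane-2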
If deletion of the machine is delayed for any reason or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field. Important Do not interrupt machine deletion by pressing Ctrl+c . You must allow the command to proceed to completion. Open a new terminal window to edit and delete the finalizer fields. A new machine is automatically provisioned after deleting the machine of the unhealthy member. Edit the machine configuration by running the following command: USD oc edit machine -n openshift-machine-api examplecluster-control-plane-2 Delete the following fields in the Machine custom resource, and then save the updated file: finalizers: - machine.machine.openshift.io Example output machine.machine.openshift.io/examplecluster-control-plane-2 edited Verify that the machine was deleted by running the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned Verify that the node has been deleted by running the following command: USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.31.3 openshift-control-plane-1 Ready master 3h24m v1.31.3 openshift-compute-0 Ready worker 176m v1.31.3 openshift-compute-1 Ready worker 176m v1.31.3 Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF Note The username and password can be found from the other bare metal host's secrets. The protocol to use in bmc:address can be taken from other bmh objects. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. 
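As an alternative to filling in base64-encoded values under the data field of the Secret shown above, you can let oc encode the credentials for you. This is a sketch rather than the documented method; the username and password values are placeholders.
# Creates the same BMC secret from literal values; oc handles the encoding.
oc create secret generic openshift-control-plane-2-bmc-secret \
  -n openshift-machine-api \
  --from-literal=username=<username> \
  --from-literal=password=<password>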
Verify the creation process using available BareMetalHost objects: USD oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It should take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Verify that the bare metal host becomes provisioned and no error reported by running the following command: USD oc get bmh -n openshift-machine-api Example output USD oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that the new node is added and in a ready state by running this command: USD oc get nodes Example output USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.31.3 openshift-control-plane-1 Ready master 4h26m v1.31.3 openshift-control-plane-2 Ready master 12m v1.31.3 openshift-compute-0 Ready worker 3h58m v1.31.3 openshift-compute-1 Ready worker 3h58m v1.31.3 Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. 
Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. To verify there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ Note If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Verify that all etcd members are healthy by running the following command: # etcdctl endpoint health --cluster Example output https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms Validate that all nodes are at the latest revision by running the following command: USD oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' 6.2.5. Additional resources Quorum protection with machine lifecycle hooks 6.3. Disaster recovery 6.3.1. 
About disaster recovery The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state. Important Disaster recovery requires you to have at least one healthy control plane host. Quorum restoration This solution handles situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. This solution does not require an etcd backup. Note If you have a majority of your control plane nodes still available and have an etcd quorum, then replace a single unhealthy etcd member . Restoring to a cluster state This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. If you have taken an etcd backup, you can restore your cluster to that previous state. If applicable, you might also need to recover from expired control plane certificates . Warning Restoring to a cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort. Prior to performing a restore, see About restoring cluster state for more information on the impact to the cluster. Recovering from expired control plane certificates This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates. 6.3.1.1. Testing restore procedures Testing the restore procedure is important to ensure that your automation and workload handle the new cluster state gracefully. Due to the complex nature of etcd quorum and the etcd Operator attempting to mend automatically, it is often difficult to correctly bring your cluster into a broken enough state that it can be restored. Warning You must have SSH access to the cluster. Your cluster might be entirely lost without SSH access. Prerequisites You have SSH access to control plane hosts. You have installed the OpenShift CLI ( oc ). Procedure Use SSH to connect to each of your nonrecovery nodes and run the following commands to disable etcd and the kubelet service: Disable etcd by running the following command: USD sudo /usr/local/bin/disable-etcd.sh Delete variable data for etcd by running the following command: USD sudo rm -rf /var/lib/etcd Disable the kubelet service by running the following command: USD sudo systemctl disable kubelet.service Exit every SSH session. Run the following command to ensure that your nonrecovery nodes are in a NOT READY state: USD oc get nodes Follow the steps in "Restoring to a cluster state" to restore your cluster. After you restore the cluster and the API responds, use SSH to connect to each nonrecovery node and enable the kubelet service: USD sudo systemctl enable kubelet.service Exit every SSH session. Run the following command to observe your nodes coming back into the READY state: USD oc get nodes Run the following command to verify that etcd is available: USD oc get pods -n openshift-etcd Additional resources Restoring to a cluster state 6.3.2.
Quorum restoration You can use the quorum-restore.sh script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the OpenShift Container Platform API becomes read-only. After quorum is restored, the OpenShift Container Platform API returns to read/write mode. 6.3.2.1. Restoring etcd quorum for high availability clusters You can use the quorum-restore.sh script to instantly bring back a new single-member etcd cluster based on its local data directory and mark all other members as invalid by retiring the cluster identifier. No prior backup is required to restore the control plane from. Warning You might experience data loss if the host that runs the restoration does not have all data replicated to it. Important Quorum restoration should not be used to decrease the number of nodes outside of the restoration process. Decreasing the number of nodes results in an unsupported cluster configuration. Prerequisites You have SSH access to the node used to restore quorum. Procedure Select a control plane host to use as the recovery host. You run the restore operation on this host. List the running etcd pods by running the following command: USD oc get pods -n openshift-etcd -l app=etcd --field-selector="status.phase==Running" Choose a pod and run the following command to obtain its IP address: USD oc exec -n openshift-etcd <etcd-pod> -c etcdctl -- etcdctl endpoint status -w table Note the IP address of a member that is not a learner and has the highest Raft index. Run the following command and note the node name that corresponds to the IP address of the chosen etcd member: USD oc get nodes -o jsonpath='{range .items[*]}[{.metadata.name},{.status.addresses[?(@.type=="InternalIP")].address}]{end}' Using SSH, connect to the chosen recovery node and run the following command to restore etcd quorum: USD sudo -E /usr/local/bin/quorum-restore.sh After a few minutes, the nodes that went down are automatically synchronized with the node that the recovery script was run on. Any remaining online nodes automatically rejoin the new etcd cluster created by the quorum-restore.sh script. This process takes a few minutes. Exit the SSH session. Return to a three-node configuration if any nodes are offline. Repeat the following steps for each node that is offline to delete and re-create them. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare-metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host. For bare-metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the offline nodes. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the offline node, ip-10-0-131-183.ec2.internal . Delete the machine of the offline node by running: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the offline node. A new machine is automatically provisioned after deleting the machine of the offline node. Verify that a new machine has been created by running: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically synchronize when the machine or node returns to a healthy state. Repeat these steps for each node that is offline. Wait until the control plane recovers by running the following command: USD oc adm wait-for-stable-cluster Note It can take up to 15 minutes for the control plane to recover. Troubleshooting If you see no progress rolling out the etcd static pods, you can force redeployment from the etcd cluster Operator by running the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD(date --rfc-3339=ns )"'"}}' --type=merge 6.3.2.2. 
Additional resources Installing a user-provisioned cluster on bare metal Replacing a bare-metal control plane node 6.3.3. Restoring to a cluster state To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. 6.3.3.1. About restoring cluster state You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations: The cluster has lost the majority of control plane hosts (quorum loss). An administrator has deleted something critical and must restore to recover the cluster. Warning Restoring to a cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort. If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OpenShift Container Platform Operators, including the network Operator. It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues. In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates. 6.3.3.2. Restoring to a cluster state for a single node You can use a saved etcd backup to restore a previous cluster state on a single node. Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.18.2 cluster must use an etcd backup that was taken from 4.18.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. You have SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Procedure Use SSH to connect to the single node and copy the etcd backup to the /home/core directory by running the following command: USD cp <etcd_backup_directory> /home/core Run the following command in the single node to restore the cluster from a backup: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd_backup_directory> Exit the SSH session. Monitor the recovery progress of the control plane by running the following command: USD oc adm wait-for-stable-cluster Note It can take up to 15 minutes for the control plane to recover. 6.3.3.3. Restoring to a cluster state You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts. For a three-node high availability (HA) cluster, you must shut down etcd on two hosts to avoid a cluster split. Quorum requires a simple majority of nodes.
The minimum number of nodes required for quorum on a three-node HA cluster is two. If you start a new cluster from backup on your recovery host, the other etcd members might still be able to form quorum and continue service. Note If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. For OpenShift Container Platform on a single node, see "Restoring to a cluster state for a single node". Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.18.2 cluster must use an etcd backup that was taken from 4.18.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. You have SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Using SSH, connect to each control plane node and run the following command to disable etcd: USD sudo -E /usr/local/bin/disable-etcd.sh Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Use SSH to connect to the recovery host and restore the cluster from a backup by running the following command: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd-backup-directory> Exit the SSH session. Once the API responds, turn off the etcd Operator quorum guard by runnning the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' Monitor the recovery progress of the control plane by running the following command: USD oc adm wait-for-stable-cluster Note It can take up to 15 minutes for the control plane to recover. 
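If you rehearse this restore regularly, for example as part of the testing procedure described earlier, the SSH steps above can be collected into a single sketch. The hostnames below are placeholders, the backup directory is assumed to be ./backup on your workstation, and master-0 is assumed to be the recovery host.
# Sketch only: disable etcd on every control plane node, copy the backup to
# the recovery host, and run the restore script there.
RECOVERY=master-0.example.com
for n in master-0.example.com master-1.example.com master-2.example.com; do
  ssh core@"$n" 'sudo -E /usr/local/bin/disable-etcd.sh'
done
scp -r ./backup core@"$RECOVERY":/home/core/
ssh core@"$RECOVERY" 'sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup'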
Once recovered, enable the quorum guard by running the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' Troubleshooting If you see no progress rolling out the etcd static pods, you can force redeployment from the cluster-etcd-operator by running the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD(date --rfc-3339=ns )"'"}}' --type=merge 6.3.3.4. Additional resources Installing a user-provisioned cluster on bare metal Creating a bastion host to access OpenShift Container Platform instances and the control plane nodes with SSH Replacing a bare-metal control plane node 6.3.3.5. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, a state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa. The following are some example scenarios that produce an out-of-date status: MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from respective nodes. Delete LocalVolume or LocalVolumeSet objects (see Storage Configuring persistent storage Persistent storage using local volumes Deleting the Local Storage Operator Resources ). 6.3.4. Recovering from expired control plane certificates 6.3.4.1. Recovering from expired control plane certificates The cluster can automatically recover from expired control plane certificates.
However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs. Use the following steps to approve the pending CSRs: Procedure Get the list of current CSRs: USD oc get csr Example output 1 A pending kubelet serving CSR (for user-provisioned installations). 2 A pending node-bootstrapper CSR. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet serving CSR: USD oc adm certificate approve <csr_name> | [
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade",
"oc apply -f enable-tech-preview-no-upgrade.yaml",
"oc get crd | grep backup",
"backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get nodes",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: \"20 4 * * *\" 1 timeZone: \"UTC\" pvcName: etcd-backup-pvc",
"spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2",
"spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1",
"oc create -f etcd-recurring-backup.yaml",
"oc get cronjob -n openshift-etcd",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'",
"2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy",
"oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running",
"ip-10-0-131-183.ec2.internal stopped 1",
"oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable",
"ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1",
"oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"",
"ip-10-0-131-183.ec2.internal NotReady master 122m v1.31.3 1",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.31.3 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.31.3 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.31.3",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 6fc1e7c9db35841d",
"Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc delete node <node_name>",
"oc delete node ip-10-0-131-183.ec2.internal",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc debug node/ip-10-0-131-183.ec2.internal 1",
"sh-4.2# chroot /host",
"sh-4.2# mkdir /var/lib/etcd-backup",
"sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/",
"sh-4.2# mv /var/lib/etcd/ /tmp",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 62bcf33650a7170a",
"Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl endpoint health",
"https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+",
"sh-4.2# etcdctl member remove 7a8197040a5126c8",
"Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep openshift-control-plane-2",
"etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m",
"oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.18.0 True False False 3d15h",
"oc delete bmh openshift-control-plane-2 -n openshift-machine-api",
"baremetalhost.metal3.io \"openshift-control-plane-2\" deleted",
"oc delete machine -n openshift-machine-api examplecluster-control-plane-2",
"oc edit machine -n openshift-machine-api examplecluster-control-plane-2",
"finalizers: - machine.machine.openshift.io",
"machine.machine.openshift.io/examplecluster-control-plane-2 edited",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.31.3 openshift-control-plane-1 Ready master 3h24m v1.31.3 openshift-compute-0 Ready worker 176m v1.31.3 openshift-compute-1 Ready worker 176m v1.31.3",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get bmh -n openshift-machine-api",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get nodes",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.31.3 openshift-control-plane-1 Ready master 4h26m v1.31.3 openshift-control-plane-2 Ready master 12m v1.31.3 openshift-compute-0 Ready worker 3h58m v1.31.3 openshift-compute-1 Ready worker 3h58m v1.31.3",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+",
"etcdctl endpoint health --cluster",
"https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms",
"oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision",
"sudo /usr/local/bin/disable-etcd.sh",
"sudo rm -rf /var/lib/etcd",
"sudo systemctl disable kubelet.service",
"oc get nodes",
"sudo systemctl enable kubelet.service",
"oc get nodes",
"oc get pods -n openshift-etcd",
"oc get pods -n openshift-etcd -l app=etcd --field-selector=\"status.phase==Running\"",
"oc exec -n openshift-etcd <etcd-pod> -c etcdctl -- etcdctl endpoint status -w table",
"oc get nodes -o jsonpath='{range .items[*]}[{.metadata.name},{.status.addresses[?(@.type==\"InternalIP\")].address}]{end}'",
"sudo -E /usr/local/bin/quorum-restore.sh",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc adm wait-for-stable-cluster",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD(date --rfc-3339=ns )\"'\"}}' --type=merge",
"cp <etcd_backup_directory> /home/core",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd_backup_directory>",
"oc adm wait-for-stable-cluster",
"sudo -E /usr/local/bin/disable-etcd.sh",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd-backup-directory>",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc adm wait-for-stable-cluster",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD(date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/backup_and_restore/control-plane-backup-and-restore |
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/providing-feedback-on-red-hat-documentation_planning |
14.9. Additional Resources The following sections give you the means to explore Samba in greater detail. 14.9.1. Installed Documentation /usr/share/doc/samba-<version-number>/ - All additional files included with the Samba distribution. This includes all helper scripts, sample configuration files, and documentation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-resources
Chapter 6. Post-installation network configuration | Chapter 6. Post-installation network configuration After installing OpenShift Container Platform, you can further expand and customize your network to your requirements. 6.1. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. Note After cluster installation, you cannot modify the fields listed in the section. 6.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a ConfigMap that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The ConfigMap name that will be referenced from the Proxy object. 4 The ConfigMap must be in the openshift-config namespace. Create the ConfigMap from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. 
If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the ConfigMap in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the ConfigMap must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 6.3. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone. Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 6.4. Configuring ingress cluster traffic OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller. Otherwise, use a load balancer, an external IP, or a node port. Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header. Automatically assign an external IP by using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 6.5. 
Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 6.5.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 6.5.1.1. Expanding the node port range You can expand the node port range for the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 6.6. Configuring network policy As a cluster administrator or project administrator, you can configure network policies for a project. 6.6.1. About network policy In a cluster using a Kubernetes Container Network Interface (CNI) plug-in that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.7, OpenShift SDN supports using network policy in its default network isolation mode. Note When using the OpenShift SDN cluster network provider, the following limitations apply regarding network policies: Egress network policy as specified by the egress field is not supported. IPBlock is supported by network policy, but without support for except clauses. If you create a policy with an IPBlock section that includes an except clause, the SDN pods log warnings and the entire IPBlock section of that policy is ignored. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.
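Before working through the example policies that follow, it can help to see what is already defined in a project and which pods a given selector would match. The following is only a generic sketch: <project> and <policy_name> are placeholders, and the role=frontend label is borrowed from the examples below.

# List the NetworkPolicy objects that currently apply in a project.
oc get networkpolicy -n <project>

# Inspect one policy in detail (rule-by-rule summary).
oc describe networkpolicy <policy_name> -n <project>

# See which pods a podSelector such as role=frontend would match.
oc get pods -n <project> -l role=frontend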
The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 6.6.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.6.3. Creating a network policy To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. 
Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy "default-deny" created 6.6.4. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. 
A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 6.6.5. Creating default network policies for a new project As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project. 6.6.6. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Global Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 6.6.6.1. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. 
Prerequisites Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. Important For the OVN-Kubernetes network provider plug-in, when the Ingress Controller is configured to use the HostNetwork endpoint publishing strategy, there is no supported way to apply network policy so that ingress traffic is allowed and all other traffic is denied. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 6.7. Supported configurations The following configurations are supported for the current release of Red Hat OpenShift Service Mesh. 6.7.1. Supported platforms The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Version 2.2 Service Mesh control planes are supported on the following platform versions: Red Hat OpenShift Container Platform version 4.9 or later. Red Hat OpenShift Dedicated version 4. Azure Red Hat OpenShift (ARO) version 4. Red Hat OpenShift Service on AWS (ROSA). 6.7.2. Unsupported configurations Explicitly unsupported cases include: OpenShift Online is not supported for Red Hat OpenShift Service Mesh. Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running. 6.7.3. Supported network configurations Red Hat OpenShift Service Mesh supports the following network configurations. OpenShift-SDN OVN-Kubernetes is supported on OpenShift Container Platform 4.7.32+, OpenShift Container Platform 4.8.12+, and OpenShift Container Platform 4.9+. Third-Party Container Network Interface (CNI) plug-ins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information. 6.7.4. 
Supported configurations for Service Mesh This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z, and IBM Power Systems. IBM Z is only supported on OpenShift Container Platform 4.6 and later. IBM Power Systems is only supported on OpenShift Container Platform 4.6 and later. Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster. Configurations that do not integrate external services such as virtual machines. Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. 6.7.5. Supported configurations for Kiali The Kiali console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 6.7.6. Supported configurations for Distributed Tracing Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated. 6.7.7. Supported WebAssembly module 3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules. 6.7.8. Operator overview Red Hat OpenShift Service Mesh requires the following four Operators: OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform. It is based on the open source Elasticsearch project. Red Hat OpenShift distributed tracing platform - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. Kiali - Provides observability for your service mesh. Allows you to view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 6.8. Optimizing routing The OpenShift Container Platform HAProxy router scales to optimize performance. 6.8.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the Ingress point for all external traffic destined for OpenShift Container Platform services. When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network/SDN solution, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. 
In HTTP keep-alive mode scenarios: Encryption LoadBalancerService HostNetwork none 21515 29622 edge 16743 22913 passthrough 36786 53295 re-encrypt 21583 25198 In HTTP close (no keep-alive) scenarios: Encryption LoadBalancerService HostNetwork none 5719 8273 edge 2729 4069 passthrough 4121 5344 re-encrypt 2320 2941 Default Ingress Controller configuration with ROUTER_THREADS=4 was used and two different endpoint publishing strategies (LoadBalancerService/HostNetwork) were tested. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router: Number of applications Application type 5-10 static file/web server or caching proxy 100-1000 applications generating dynamic content In general, HAProxy can support routes for 5 to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. 6.8.2. Ingress Controller (router) performance optimizations OpenShift Container Platform no longer supports modifying Ingress Controller deployments by setting environment variables such as ROUTER_THREADS , ROUTER_DEFAULT_TUNNEL_TIMEOUT , ROUTER_DEFAULT_CLIENT_TIMEOUT , ROUTER_DEFAULT_SERVER_TIMEOUT , and RELOAD_INTERVAL . You can modify the Ingress Controller deployment, but if the Ingress Operator is enabled, the configuration is overwritten. 6.9. Post-installation RHOSP network configuration You can configure some aspects of a OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation. 6.9.1. Configuring application access with floating IP addresses After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic. Note You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set. Prerequisites OpenShift Container Platform cluster must be installed Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation. Procedure After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port: Show the port: USD openstack port show <cluster_name>-<cluster_ID>-ingress-port Attach the port to the IP address: USD openstack floating ip set --port <ingress_port_ID> <apps_FIP> Add a wildcard A record for *apps. 
to your DNS file: *.apps.<cluster_name>.<base_domain> IN A <apps_FIP> Note If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts : <apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain> 6.9.2. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 6.9.3. Adjusting Kuryr ports pool settings in active deployments on RHOSP You can use a custom resource (CR) to configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation on a deployed cluster. Procedure From a command line, open the Cluster Network Operator (CNO) CR for editing: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 
3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting the value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . Save your changes and quit the text editor to commit your changes. Important Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed. | [
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}",
"oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}",
"oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'",
"network.config.openshift.io/cluster patched",
"oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'",
"\"service-node-port-range\":[\"30000-33000\"]",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy \"default-deny\" created",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"openstack port show <cluster_name>-<cluster_ID>-ingress-port",
"openstack floating ip set --port <ingress_port_ID> <apps_FIP>",
"*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>",
"<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/post-installation_configuration/post-install-network-configuration |
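The Kuryr section in the entry above lists the ports-pool parameters that can be set in the cluster-network-03-config.yml manifest before installation, but only shows the post-installation CR edit. The following is a minimal sketch of what such a manifest might look like; it assumes the pre-install manifest accepts the same kuryrConfig stanza as the operator CR shown in that entry, and the numeric values are illustrative rather than recommendations.

# Sketch only: pre-install manifest for the Kuryr ports-pool parameters described above.
# Assumes the same kuryrConfig fields as the post-install operator CR; values are examples.
cat <<'EOF' > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: true   # prepopulate pools when a namespace or node is added
      poolMinPorts: 5                      # keep at least 5 free ports in each pool
      poolBatchPorts: 10                   # create up to 10 Neutron ports per request
      poolMaxPorts: 50                     # delete free ports beyond this count
EOF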
1.13. Connecting to a VDB as a Data Source | 1.13. Connecting to a VDB as a Data Source JBoss Data Virtualization virtual databases (VDBs) can be configured as a JBoss Enterprise Application Platform (EAP) data source. The data source can then be accessed from JNDI or injected into your Java EE applications. A JBoss Data Virtualization data source is deployed in the same way as any other database resource. Note The recommended approach for configuring data sources is to use the JBoss CLI or the Management Console rather than editing the standalone.xml configuration file directly. See the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide for more information on how to configure data sources in JBoss EAP. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/connecting_to_a_vdb_as_a_data_source1 |
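Because the entry above recommends the JBoss CLI over editing standalone.xml directly, here is a minimal sketch of registering a VDB as an EAP data source from jboss-cli.sh. The VDB name, JNDI name, host, port, driver name, and credentials are illustrative assumptions; the exact driver name depends on how the Teiid JDBC driver was registered in your installation.

# Sketch only (run from EAP_HOME/bin); all names and credentials are placeholders.
./jboss-cli.sh --connect
# Assumes a JDBC driver registered as "teiid" and a deployed VDB named "MyVDB"
# listening on the default Teiid JDBC port 31000.
data-source add --name=MyVDB-DS \
    --jndi-name=java:/datasources/MyVDB \
    --driver-name=teiid \
    --connection-url=jdbc:teiid:MyVDB@mm://localhost:31000 \
    --user-name=teiidUser --password=secret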
Monitoring Guide | Monitoring Guide Red Hat Gluster Storage 3.5 Monitoring Gluster Cluster Red Hat Gluster Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/monitoring_guide/index |
Updating clusters | Updating clusters OpenShift Container Platform 4.17 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team | [
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"oc adm upgrade channel <channel>",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb",
"Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)",
"Cluster update time = 60 + (6 x 5) = 90 minutes",
"Cluster update time = 60 + (3 x 5) = 75 minutes",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc adm upgrade",
"Recommended updates: VERSION IMAGE 4.17.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"RELEASE_IMAGE=<update_pull_spec>",
"quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1",
"oc create namespace <component_namespace>",
"RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ccoctl aws create-all \\ 1 --name=<name> \\ 2 --region=<aws_region> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> \\ 5 --create-private-s3-bucket 6",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"ccoctl azure create-managed-identities --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" \\ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 5 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\" 6",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"oc edit cloudcredential cluster",
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>",
"RUN depmod -b /opt USD{KERNEL_VERSION}",
"quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863",
"apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm upgrade",
"Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd",
"oc adm upgrade channel <channel>",
"oc adm upgrade channel stable-4.17",
"oc adm upgrade --to-latest=true 1",
"oc adm upgrade --to=<version> 1",
"oc adm upgrade",
"oc adm upgrade",
"Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss.",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.30.3 ip-10-0-170-223.ec2.internal Ready master 82m v1.30.3 ip-10-0-179-95.ec2.internal Ready worker 70m v1.30.3 ip-10-0-182-134.ec2.internal Ready worker 70m v1.30.3 ip-10-0-211-16.ec2.internal Ready master 82m v1.30.3 ip-10-0-250-100.ec2.internal Ready worker 69m v1.30.3",
"export OC_ENABLE_CMD_UPGRADE_STATUS=true",
"oc adm upgrade status",
"= Control Plane = Assessment: Progressing Target Version: 4.14.1 (from 4.14.0) Completion: 97% Duration: 54m Operator Status: 32 Healthy, 1 Unavailable Control Plane Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Upgrade = = Worker Pool = Worker Pool: worker Assessment: Progressing Completion: 0% Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-20-162.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Pool = Worker Pool: infra Assessment: Progressing Completion: 0% Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Node NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.14.0 +10m = Update Health = SINCE LEVEL IMPACT MESSAGE 14m4s Info None Update is proceeding well",
"oc adm upgrade --include-not-recommended",
"oc adm upgrade --allow-not-recommended --to <version> <.>",
"oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge",
"clusterversion.config.openshift.io/version patched",
"spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+2.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False",
"oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes",
"ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>=",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=",
"node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3",
"oc create -f <file_name>",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: \"\"",
"oc create -f machineConfigPool.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf created",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-b node-role.kubernetes.io/worker-perf=''",
"oc label node worker-c node-role.kubernetes.io/worker-perf=''",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"oc create -f new-machineconfig.yaml",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: \"\"",
"oc create -f machineConfigPool-Canary.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5",
"systemctl status kdump.service",
"NAME STATUS ROLES AGE VERSION kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS)",
"cat /proc/cmdline",
"crashkernel=512M",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary-",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc get machineconfigpools",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>-",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-",
"node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m",
"oc delete mcp <mcp_name>",
"--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"",
"[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml",
"systemctl disable --now firewalld.service",
"subscription-manager repos --disable=rhocp-4.16-for-rhel-8-x86_64-rpms --enable=rhocp-4.17-for-rhel-8-x86_64-rpms",
"yum swap ansible ansible-core",
"yum update openshift-ansible openshift-clients",
"subscription-manager repos --disable=rhocp-4.16-for-rhel-8-x86_64-rpms --enable=rhocp-4.17-for-rhel-8-x86_64-rpms",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1",
"oc get node",
"NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.30.3 mycluster-control-plane-1 Ready master 145m v1.30.3 mycluster-control-plane-2 Ready master 145m v1.30.3 mycluster-rhel8-0 Ready worker 98m v1.30.3 mycluster-rhel8-1 Ready worker 98m v1.30.3 mycluster-rhel8-2 Ready worker 98m v1.30.3 mycluster-rhel8-3 Ready worker 98m v1.30.3",
"yum update",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.30.3 control-plane-node-1 Ready master 75m v1.30.3 control-plane-node-2 Ready master 75m v1.30.3",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.30.3 compute-node-1 Ready worker 30m v1.30.3 compute-node-2 Ready worker 30m v1.30.3",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>",
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade",
"working towards USD{VERSION}: 106 of 841 done (12% complete), waiting on machine-config",
"oc adm upgrade status",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml",
"oc apply -f ./99-worker-bootupctl-update.yaml",
"export OC_ENABLE_CMD_UPGRADE_STATUS=true",
"oc adm upgrade status",
"= Control Plane = Assessment: Progressing Target Version: 4.14.1 (from 4.14.0) Completion: 97% Duration: 54m Operator Status: 32 Healthy, 1 Unavailable Control Plane Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Upgrade = = Worker Pool = Worker Pool: worker Assessment: Progressing Completion: 0% Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.14.0 +10m ip-10-0-20-162.us-east-2.compute.internal Outdated Pending 4.14.0 ? ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.14.0 ? = Worker Pool = Worker Pool: infra Assessment: Progressing Completion: 0% Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded Worker Pool Node NAME ASSESSMENT PHASE VERSION EST MESSAGE ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.14.0 +10m = Update Health = SINCE LEVEL IMPACT MESSAGE 14m4s Info None Update is proceeding well",
"oc describe clusterversions/version",
"Desired: Channels: candidate-4.13 candidate-4.14 fast-4.13 fast-4.14 stable-4.13 Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 URL: https://access.redhat.com/errata/RHSA-2023:6130 Version: 4.13.19 History: Completion Time: 2023-11-07T20:26:04Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 Started Time: 2023-11-07T19:11:36Z State: Completed Verified: true Version: 4.13.19 Completion Time: 2023-10-04T18:53:29Z Image: quay.io/openshift-release-dev/ocp-release@sha256:eac141144d2ecd6cf27d24efe9209358ba516da22becc5f0abc199d25a9cfcec Started Time: 2023-10-04T17:26:31Z State: Completed Verified: true Version: 4.13.13 Completion Time: 2023-09-26T14:21:43Z Image: quay.io/openshift-release-dev/ocp-release@sha256:371328736411972e9640a9b24a07be0af16880863e1c1ab8b013f9984b4ef727 Started Time: 2023-09-26T14:02:33Z State: Completed Verified: false Version: 4.13.12 Observed Generation: 4 Version Hash: CMLl3sLq-EA= Events: <none>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/updating_clusters/index |
Chapter 11. Storage Concepts | Chapter 11. Storage Concepts This chapter introduces the concepts used for describing and managing storage devices. Terms such as Storage pools and Volumes are explained in the sections that follow. 11.1. Storage Pools A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to guest virtual machines. The storage pool can be local or it can be shared over a network. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by guest virtual machines. Storage pools are divided into storage volumes either by the storage administrator or the system administrator, and the volumes are assigned to guest virtual machines as block devices. In short, storage volumes are to partitions what storage pools are to disks. Although the storage pool is a virtual container, it is limited by two factors: the maximum size allowed to it by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows: virtio-blk = 2^63 bytes or 8 Exabytes (using raw files or disk) Ext4 = ~ 16 TB (using 4 KB block size) XFS = ~8 Exabytes qcow2 and host file systems keep their own metadata, so scalability should be evaluated and tuned when using very large image sizes. Using raw disks means fewer layers that could affect scalability or maximum size. libvirt uses a directory-based storage pool, the /var/lib/libvirt/images/ directory, as the default storage pool. The default storage pool can be changed to another storage pool. Local storage pools - Local storage pools are directly attached to the host physical machine server. Local storage pools include: local directories, directly attached disks, physical partitions, and LVM volume groups. These storage volumes store guest virtual machine images or are attached to guest virtual machines as additional storage. As local storage pools are directly attached to the host physical machine server, they are useful for development, testing, and small deployments that do not require migration or large numbers of guest virtual machines. Local storage pools are not suitable for many production environments because they do not support live migration. Networked (shared) storage pools - Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between host physical machines with virt-manager, but is optional when migrating with virsh. Networked storage pools are managed by libvirt. Supported protocols for networked storage pools include: Fibre Channel-based LUNs iSCSI NFS GFS2 SCSI RDMA protocols (SCSI RCP), the block export protocol used in InfiniBand and 10GbE iWARP adapters. Note Multi-path storage pools should not be created or used as they are not fully supported. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-storage_concepts |
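As a concrete illustration of the directory-based pools described above, the following virsh commands define, build, and start a local directory pool and create one volume in it. The pool name, target path, and volume size are arbitrary examples, not recommendations.

# Sketch only: create a directory-based storage pool and a single volume in it.
virsh pool-define-as guest_images_dir dir --target /var/lib/libvirt/guest_images
virsh pool-build guest_images_dir          # create the target directory if it does not exist
virsh pool-start guest_images_dir
virsh pool-autostart guest_images_dir      # start the pool automatically on host boot
virsh vol-create-as guest_images_dir vm1-disk.qcow2 10G --format qcow2
virsh vol-list guest_images_dir            # confirm the new volume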
probe::nfs.proc.handle_exception | probe::nfs.proc.handle_exception Name probe::nfs.proc.handle_exception - NFS client handling an NFSv4 exception Synopsis nfs.proc.handle_exception Values errorcode indicates the type of error Description This is the error-handling routine for NFSv4 client processes. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-handle-exception |
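A short illustration of how the probe above might be used: the one-liner below prints the error code for each NFSv4 exception handled by the client over a 30-second window. It assumes SystemTap and the matching kernel debuginfo packages are installed; the output format is arbitrary.

# Sketch only: trace NFSv4 exceptions handled by the NFS client for 30 seconds.
stap -v -e '
probe nfs.proc.handle_exception {
    printf("%s: NFSv4 exception, errorcode=%d\n", execname(), errorcode)
}
probe timer.s(30) { exit() }
'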
Chapter 20. OperatorHub [config.openshift.io/v1] | Chapter 20. OperatorHub [config.openshift.io/v1] Description OperatorHub is the Schema for the operatorhubs API. It can be used to change the state of the default hub sources for OperatorHub on the cluster from enabled to disabled and vice versa. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 20.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorHubSpec defines the desired state of OperatorHub status object OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. 20.1.1. .spec Description OperatorHubSpec defines the desired state of OperatorHub Type object Property Type Description disableAllDefaultSources boolean disableAllDefaultSources allows you to disable all the default hub sources. If this is true, a specific entry in sources can be used to enable a default source. If this is false, a specific entry in sources can be used to disable or enable a default source. sources array sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. sources[] object HubSource is used to specify the hub source and its configuration 20.1.2. .spec.sources Description sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. Type array 20.1.3. .spec.sources[] Description HubSource is used to specify the hub source and its configuration Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster name string name is the name of one of the default hub sources 20.1.4. .status Description OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. 
Type object Property Type Description sources array sources encapsulates the result of applying the configuration for each hub source sources[] object HubSourceStatus is used to reflect the current state of applying the configuration to a default source 20.1.5. .status.sources Description sources encapsulates the result of applying the configuration for each hub source Type array 20.1.6. .status.sources[] Description HubSourceStatus is used to reflect the current state of applying the configuration to a default source Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster message string message provides more information regarding failures name string name is the name of one of the default hub sources status string status indicates success or failure in applying the configuration 20.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/operatorhubs DELETE : delete collection of OperatorHub GET : list objects of kind OperatorHub POST : create an OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name} DELETE : delete an OperatorHub GET : read the specified OperatorHub PATCH : partially update the specified OperatorHub PUT : replace the specified OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name}/status GET : read status of the specified OperatorHub PATCH : partially update status of the specified OperatorHub PUT : replace status of the specified OperatorHub 20.2.1. /apis/config.openshift.io/v1/operatorhubs HTTP method DELETE Description delete collection of OperatorHub Table 20.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorHub Table 20.2. HTTP responses HTTP code Reponse body 200 - OK OperatorHubList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorHub Table 20.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.4. Body parameters Parameter Type Description body OperatorHub schema Table 20.5. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 202 - Accepted OperatorHub schema 401 - Unauthorized Empty 20.2.2. /apis/config.openshift.io/v1/operatorhubs/{name} Table 20.6. 
Global path parameters Parameter Type Description name string name of the OperatorHub HTTP method DELETE Description delete an OperatorHub Table 20.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 20.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorHub Table 20.9. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorHub Table 20.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.11. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorHub Table 20.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.13. Body parameters Parameter Type Description body OperatorHub schema Table 20.14. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty 20.2.3. /apis/config.openshift.io/v1/operatorhubs/{name}/status Table 20.15. Global path parameters Parameter Type Description name string name of the OperatorHub HTTP method GET Description read status of the specified OperatorHub Table 20.16. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorHub Table 20.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.18. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorHub Table 20.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.20. 
Body parameters Parameter Type Description body OperatorHub schema Table 20.21. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/operatorhub-config-openshift-io-v1 |
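In practice the spec fields above are usually changed with the oc client rather than raw REST calls against these endpoints. The commands below are a sketch: they assume the default cluster-scoped OperatorHub instance is named cluster, as it is on a standard installation, and redhat-operators is used only as an illustrative source name.

# Inspect the current state of the default hub sources
oc get operatorhub cluster -o yaml

# Disable all default catalog sources in one step
oc patch operatorhub cluster --type merge -p '{"spec":{"disableAllDefaultSources":true}}'

# With the global switch on, selectively re-enable a single default source
oc patch operatorhub cluster --type merge -p '{"spec":{"sources":[{"name":"redhat-operators","disabled":false}]}}'

After either patch, the status.sources list reports, per source, whether the configuration was applied.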
21.2. Types | 21.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with postgresql . Different types allow you to configure flexible access. Note that in the list below are used several regular expression to match the whole possible locations: postgresql_db_t This type is used for several locations. The locations labeled with this type are used for data files for PostgreSQL: /usr/lib/pgsql/test/regres /usr/share/jonas/pgsql /var/lib/pgsql/data /var/lib/postgres(ql)? postgresql_etc_t This type is used for configuration files in the /etc/postgresql/ directory. postgresql_exec_t This type is used for several locations. The locations labeled with this type are used for binaries for PostgreSQL: /usr/bin/initdb(.sepgsql)? /usr/bin/(se)?postgres /usr/lib(64)?/postgresql/bin/.* /usr/lib(64)?/pgsql/test/regress/pg_regress systemd_unit_file_t This type is used for the executable PostgreSQL-related files located in the /usr/lib/systemd/system/ directory. postgresql_log_t This type is used for several locations. The locations labeled with this type are used for log files: /var/lib/pgsql/logfile /var/lib/pgsql/pgstartup.log /var/lib/sepgsql/pgstartup.log /var/log/postgresql /var/log/postgres.log.* /var/log/rhdb/rhdb /var/log/sepostgresql.log.* postgresql_var_run_t This type is used for run-time files for PostgreSQL, such as the process id (PID) in the /var/run/postgresql/ directory. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-postgresql-types |
Chapter 8. Interacting with a running Red Hat build of Kogito microservice | After your Red Hat build of Kogito microservice is running, you can send REST API requests to interact with your application and execute your microservices according to how you set up the application. This example tests the /persons REST API endpoint that is automatically generated from the decisions in the PersonDecisions.dmn file (or from the rules in the PersonRules.drl file if you used a DRL rule unit). For this example, use a REST client, the curl utility, or the Swagger UI configured for the application (such as http://localhost:8080/q/swagger-ui or http://localhost:8080/swagger-ui.html ) to send API requests with the following components: URL : http://localhost:8080/persons HTTP headers : For POST requests only: accept : application/json content-type : application/json HTTP methods : GET , POST , or DELETE (the commands below demonstrate POST; a brief GET and DELETE sketch follows the command listing) Example POST request body to add an adult (JSON) { "person": { "name": "John Quark", "age": 20 } } Example curl command to add an adult Example response (JSON) { "id": "3af806dd-8819-4734-a934-728f4c819682", "person": { "name": "John Quark", "age": 20, "adult": false }, "isAdult": true } This example procedure uses curl commands for convenience. Procedure In a command terminal window that is separate from your running application, navigate to the project that contains your Red Hat build of Kogito microservice and use any of the following curl commands with JSON requests to interact with your running microservice: Note On Spring Boot, you might need to modify how your application exposes API endpoints in order for these example requests to function. For more information, see the README file included in the example Spring Boot project that you created for this tutorial. Add an adult person: Example request Example response Add an underage person: Example request Example response Complete the evaluation using the returned UUIDs: Example request | [
"{ \"person\": { \"name\": \"John Quark\", \"age\": 20 } }",
"curl -X POST http://localhost:8080/persons -H 'content-type: application/json' -H 'accept: application/json' -d '{\"person\": {\"name\":\"John Quark\", \"age\": 20}}'",
"{ \"id\": \"3af806dd-8819-4734-a934-728f4c819682\", \"person\": { \"name\": \"John Quark\", \"age\": 20, \"adult\": false }, \"isAdult\": true }",
"curl -X POST http://localhost:8080/persons -H 'content-type: application/json' -H 'accept: application/json' -d '{\"person\": {\"name\":\"John Quark\", \"age\": 20}}'",
"{\"id\":\"3af806dd-8819-4734-a934-728f4c819682\",\"person\":{\"name\":\"John Quark\",\"age\":20,\"adult\":false},\"isAdult\":true}",
"curl -X POST http://localhost:8080/persons -H 'content-type: application/json' -H 'accept: application/json' -d '{\"person\": {\"name\":\"Jenny Quark\", \"age\": 15}}'",
"{\"id\":\"8eef502b-012b-4628-acb7-73418a089c08\",\"person\":{\"name\":\"Jenny Quark\",\"age\":15,\"adult\":false},\"isAdult\":false}",
"curl -X POST http://localhost:8080/persons/8eef502b-012b-4628-acb7-73418a089c08/ChildrenHandling/cdec4241-d676-47de-8c55-4ee4f9598bac -H 'content-type: application/json' -H 'accept: application/json' -d '{}'"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/proc-kogito-microservice-interacting-app_getting-started-kogito-microservices |
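The commands above only exercise POST, although the chapter lists GET and DELETE as supported methods. The following curl sketch rounds out the flow; the endpoint shapes ( /persons for listing, /persons/{id} for removal) and the UUID shown are assumptions inferred from the generated REST resources described above, not verbatim from the source.

# List the person evaluations currently held by the running service
curl -X GET http://localhost:8080/persons -H 'accept: application/json'

# Abort a specific evaluation using the UUID returned when it was created
# (the UUID below is illustrative -- substitute the id from your own POST response)
curl -X DELETE http://localhost:8080/persons/3af806dd-8819-4734-a934-728f4c819682 -H 'accept: application/json'

If the DELETE succeeds, a subsequent GET should no longer list that id.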
Chapter 3. Ruby Examples | 3.1. Connecting to the Red Hat Virtualization Manager The Connection class is the entry point of the software development kit. It provides access to the services of the Red Hat Virtualization Manager's REST API. The parameters of the Connection class are: url - the base URL of the Red Hat Virtualization Manager API; username and password - the credentials of an API user (for example, admin@internal ); ca_file - a PEM file containing the trusted CA certificates. The ca.pem file is required when connecting to a server protected by TLS. If you do not specify the ca_file , the system-wide CA certificate store is used. Connecting to the Red Hat Virtualization Manager connection = OvirtSDK4::Connection.new( url: 'https://engine.example.com/ovirt-engine/api', username: 'admin@internal', password: '...', ca_file: 'ca.pem', ) Important The connection holds critical resources, including a pool of HTTP connections to the server and an authentication token. You must free these resources when they are no longer in use by calling close on the connection (see the connection.close command in the listing below). The connection, and all the services obtained from it, cannot be used after the connection has been closed. If the connection fails, the software development kit raises an Error exception containing details of the failure. For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/Connection:initialize . | [
"connection = OvirtSDK4::Connection.new( url: 'https://engine.example.com/ovirt-engine/api', username: 'admin@internal', password: '...', ca_file: 'ca.pem', )",
"connection.close"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/chap-Ruby_Examples |
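Before the Ruby snippet above can connect over TLS it needs the ca.pem file it references. One common way to obtain it is to download the CA certificate from the Manager itself. This shell sketch assumes engine.example.com is your Manager host, as in the example, and that the standard pki-resource endpoint is reachable:

# Download the Manager's CA certificate for use as ca_file in Connection.new
# (-k is only needed for this bootstrap step, before the CA is trusted locally)
curl -k -o ca.pem 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

# Sanity check that the download is a PEM-encoded certificate
openssl x509 -in ca.pem -noout -subject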