Chapter 2. Specifying the RHEL kernel to run
Chapter 2. Specifying the RHEL kernel to run You can boot any installed kernel, standard or Real Time, by manually selecting the required kernel in the GRUB menu during boot. You can also configure the kernel to boot by default. When the RHEL for Real Time kernel is installed, it is automatically set as the default kernel and is used on the next boot. 2.1. Displaying the default kernel You can display the kernel that is configured to boot by default. Procedure To view the default kernel: The rt in the output of the command shows that the default kernel is a real time kernel. 2.2. Displaying the running kernel You can display the currently running kernel. Procedure To show which kernel the system is currently running: Note When the system receives a minor update, for example, from 8.3 to 8.4, the default kernel might automatically change from the Real Time kernel back to the standard kernel. 2.3. Configuring kernel-rt as the default boot kernel On a newly installed system, the stock RHEL kernel is set as the default boot kernel and is used on boot and on subsequent system updates. You can change this configuration to set kernel-rt as the default boot kernel and make the configuration persistent across system updates. Configuring kernel-rt is a one-time procedure; you can change or revert it to another kernel later if necessary. You can also configure other existing kernel variants, such as kernel , kernel-debug , or kernel-rt-debug , as the default boot kernel. Procedure To configure kernel-rt as the default boot kernel, enter the following command: RT_VMLINUZ is the name of the vmlinuz file that is associated with the kernel-rt kernel. For example: To configure kernel-rt as the default boot kernel on system updates, enter the following command: The UPDATEDEFAULT variable, when set to yes , causes the default kernel to be updated along with system updates. In the example output, the path for the default kernel is specific to the installed kernel-rt-core package. You can list the installed kernel-rt-core packages with the rpm -q kernel-rt-core command and determine the path to the kernel that a package provides with rpm -ql . Optional: If you need to determine the path to the kernel from a package, first list the installed packages: To use the latest installed package as the default, enter the following command to find the path to the boot image from that package: To configure kernel-rt as the default boot kernel, enter the following command: Verification To verify that kernel-rt is the default kernel, enter the following command:
[ "grubby --default-kernel /boot/vmlinuz-kernel-rt-5.14.0-70.13.1.rt21.83.el9_0", "~]# uname -a Linux rt-server.example.com 4.18.0-80.rt9.138.el8.x86_64 ...", "grubby --set-default= <RT_VMLINUZ>", "grubby --set-default=/boot/vmlinuz-5.14.0-284.11.1.rt14.296.el9_2.x86_64+rt", "sed -i 's/UPDATEDEFAULT=.*/UPDATEDEFAULT=yes/g'/etc/sysconfig/kernel sed -i 's/DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel-rt-core/g'/etc/sysconfig/kernel", "rpm -q kernel-rt-core kernel-rt-core-5.14.0-284.11.1.rt14.296.el9_2.x86_64 kernel-rt-core-5.14.0-284.10.1.rt14.295.el9_2.x86_64 kernel-rt-core-5.14.0-284.9.1.rt14.294.el9_2.x86_64", "rpm -ql kernel-rt-core-5.14.0-284.11.1.rt14.296.el9_2.x86_64|grep'^/boot/vmlinu' /boot/vmlinuz-5.14.0-284.11.1.rt14.296.el9_2.x86_64.x86_64+rt", "grubby --set-default=/boot/vmlinuz-5.14.0-284.11.1.rt14.296.el9_2.x86_64.x86_64+rt", "grubby --default-kernel /boot/vmlinuz-5.14.0-284.11.1.rt14.296.el9_2.x86_64.x86_64+rt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/installing_rhel_9_for_real_time/assembly_specifying-the-kernel-to-run_installing-rhel-9-for-real-time
10.9.2. Related Books
10.9.2. Related Books Apache Desktop Reference by Ralf S. Engelschall; Addison Wesley - Written by ASF member and mod_ssl author Ralf Engelschall, the Apache Desktop Reference provides a concise but comprehensive reference guide to using the Apache HTTP Server at compilation, configuration, and run time. This book is available online at http://www.apacheref.com/ . Professional Apache by Peter Wainwright; Wrox Press Ltd - Professional Apache is from Wrox Press Ltd's "Programmer to Programmer" series and is aimed at both experienced and novice Web server administrators. Administering Apache by Mark Allan Arnold; Osborne Media Group - This book is targeted at Internet Service Providers who aim to provide more secure services. Apache Server Unleashed by Richard Bowen, et al; SAMS BOOKS - An encyclopedic source for the Apache HTTP Server. Apache Pocket Reference by Andrew Ford, Gigi Estabrook; O'Reilly - This is the latest addition to the O'Reilly Pocket Reference series. System Administrators Guide ; Red Hat, Inc - Contains a chapter about configuring the Apache HTTP Server using the HTTP Configuration Tool and a chapter about configuring the Apache HTTP Server Secure Server. Security Guide ; Red Hat, Inc - The Server Security chapter explains ways to secure Apache HTTP Server and other services.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-related-books
Chapter 3. Verifying OpenShift Data Foundation deployment for internal mode
Chapter 3. Verifying OpenShift Data Foundation deployment for internal mode Use this section to verify that OpenShift Data Foundation is deployed correctly. Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 3.1. Verifying the state of the pods To determine if OpenShift Data Foundation is deployed successfully, you can verify that the pods are in Running state. Procedure Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation cluster" . Verify that the following pods are in running and completed state by clicking the Running and the Completed tabs: Table 3.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) OpenShift Data Foundation Client Operator ocs-client-operator-console-* (1 pod on any storage node) ocs-client-operator-controller-manager-* (1 pod on any storage node) UX Backend ux-backend-server-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods on each storage node) MGR rook-ceph-mgr-* (2 pods distributed across storage nodes) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage node) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) rook-ceph-exporter rook-ceph-exporter-worker-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard under Overview tab, verify that both Storage Cluster and Data Resiliency has a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, it can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw
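Where the web console is not convenient, the same checks can be approximated from the command line. The following is a minimal sketch using standard oc commands; it assumes OpenShift Data Foundation is deployed in the default openshift-storage namespace.

# Sketch: CLI spot-checks that roughly mirror the console verification steps.
# Assumes the default openshift-storage namespace.

# Pods should be Running or Completed (see Table 3.1 for the expected set).
oc get pods -n openshift-storage

# The storage cluster should report a Ready phase.
oc get storagecluster -n openshift-storage

# The OpenShift Data Foundation storage classes should exist.
oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'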
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_power/verifying_openshift_data_foundation_deployment_for_internal_mode
Chapter 2. SELinux Contexts
Chapter 2. SELinux Contexts Processes and files are labeled with an SELinux context that contains additional information, such as an SELinux user, role, type, and, optionally, a level. When running SELinux, all of this information is used to make access control decisions. In Red Hat Enterprise Linux, SELinux provides a combination of Role-Based Access Control (RBAC), Type Enforcement (TE), and, optionally, Multi-Level Security (MLS). The following is an example showing SELinux context. SELinux contexts are used on processes, Linux users, and files, on Linux operating systems that run SELinux. Use the following command to view the SELinux context of files and directories: SELinux contexts follow the SELinux user:role:type:level syntax. The fields are as follows: SELinux user The SELinux user identity is an identity known to the policy that is authorized for a specific set of roles, and for a specific MLS/MCS range. Each Linux user is mapped to an SELinux user using SELinux policy. This allows Linux users to inherit the restrictions placed on SELinux users. The mapped SELinux user identity is used in the SELinux context for processes in that session, in order to define what roles and levels they can enter. Enter the following command as root to view a list of mappings between SELinux and Linux user accounts (you need to have the policycoreutils-python package installed): Output may differ slightly from system to system: The Login Name column lists Linux users. The SELinux User column lists which SELinux user the Linux user is mapped to. For processes, the SELinux user limits which roles and levels are accessible. The MLS/MCS Range column, is the level used by Multi-Level Security (MLS) and Multi-Category Security (MCS). The Service column determines the correct SELinux context, in which the Linux user is supposed to be logged in to the system. By default, the asterisk ( * ) character is used, which stands for any service. role Part of SELinux is the Role-Based Access Control (RBAC) security model. The role is an attribute of RBAC. SELinux users are authorized for roles, and roles are authorized for domains. The role serves as an intermediary between domains and SELinux users. The roles that can be entered determine which domains can be entered; ultimately, this controls which object types can be accessed. This helps reduce vulnerability to privilege escalation attacks. type The type is an attribute of Type Enforcement. The type defines a domain for processes, and a type for files. SELinux policy rules define how types can access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. level The level is an attribute of MLS and MCS. An MLS range is a pair of levels, written as lowlevel-highlevel if the levels differ, or lowlevel if the levels are identical ( s0-s0 is the same as s0 ). Each level is a sensitivity-category pair, with categories being optional. If there are categories, the level is written as sensitivity:category-set . If there are no categories, it is written as sensitivity . If the category set is a contiguous series, it can be abbreviated. For example, c0.c3 is the same as c0,c1,c2,c3 . The /etc/selinux/targeted/setrans.conf file maps levels ( s0:c0 ) to human-readable form (that is CompanyConfidential ). In Red Hat Enterprise Linux, targeted policy enforces MCS, and in MCS, there is just one sensitivity, s0 . 
MCS in Red Hat Enterprise Linux supports 1024 different categories: c0 through to c1023 . s0-s0:c0.c1023 is sensitivity s0 and authorized for all categories. MLS enforces the Bell-La Padula Mandatory Access Model, and is used in Labeled Security Protection Profile (LSPP) environments. To use MLS restrictions, install the selinux-policy-mls package, and configure MLS to be the default SELinux policy. The MLS policy shipped with Red Hat Enterprise Linux omits many program domains that were not part of the evaluated configuration, and therefore, MLS on a desktop workstation is unusable (no support for the X Window System); however, an MLS policy from the upstream SELinux Reference Policy can be built that includes all program domains. For more information on MLS configuration, see Section 4.13, "Multi-Level Security (MLS)" . 2.1. Domain Transitions A process in one domain transitions to another domain by executing an application that has the entrypoint type for the new domain. The entrypoint permission is used in SELinux policy and controls which applications can be used to enter a domain. The following example demonstrates a domain transition: Procedure 2.1. An Example of a Domain Transition A user wants to change their password. To do this, they run the passwd utility. The /usr/bin/passwd executable is labeled with the passwd_exec_t type: The passwd utility accesses /etc/shadow , which is labeled with the shadow_t type: An SELinux policy rule states that processes running in the passwd_t domain are allowed to read and write to files labeled with the shadow_t type. The shadow_t type is only applied to files that are required for a password change. This includes /etc/gshadow , /etc/shadow , and their backup files. An SELinux policy rule states that the passwd_t domain has its entrypoint permission set to the passwd_exec_t type. When a user runs the passwd utility, the user's shell process transitions to the passwd_t domain. With SELinux, since the default action is to deny, and a rule exists that allows (among other things) applications running in the passwd_t domain to access files labeled with the shadow_t type, the passwd application is allowed to access /etc/shadow , and update the user's password. This example is not exhaustive, and is used as a basic example to explain domain transition. Although there is an actual rule that allows subjects running in the passwd_t domain to access objects labeled with the shadow_t file type, other SELinux policy rules must be met before the subject can transition to a new domain. In this example, Type Enforcement ensures: The passwd_t domain can only be entered by executing an application labeled with the passwd_exec_t type; can only execute from authorized shared libraries, such as the lib_t type; and cannot execute any other applications. Only authorized domains, such as passwd_t , can write to files labeled with the shadow_t type. Even if other processes are running with superuser privileges, those processes cannot write to files labeled with the shadow_t type, as they are not running in the passwd_t domain. Only authorized domains can transition to the passwd_t domain. For example, the sendmail process running in the sendmail_t domain does not have a legitimate reason to execute passwd ; therefore, it can never transition to the passwd_t domain. Processes running in the passwd_t domain can only read and write to authorized types, such as files labeled with the etc_t or shadow_t types. 
This prevents the passwd application from being tricked into reading or writing arbitrary files.
[ "~]USD ls -Z file1 -rwxrw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *", "~]USD ls -Z /usr/bin/passwd -rwsr-xr-x root root system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd", "~]USD ls -Z /etc/shadow -r--------. root root system_u:object_r:shadow_t:s0 /etc/shadow" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-Security-Enhanced_Linux-SELinux_Contexts
Chapter 5. Known issues
Chapter 5. Known issues This section documents known issues found in this release of Red Hat Ceph Storage. 5.1. The Cephadm utility Cephadm does not maintain the OSD weight when draining an OSD Cephadm does not maintain the OSD weight when draining an OSD. Due to this, if the ceph orch osd rm <osd-id> command is run and later, the OSD removal is stopped, Cephadm will not set the crush weight of the OSD back to its original value. The crush weight will remain at 0. As a workaround, users have to manually adjust the crush weight of the OSD to its original value, or complete removal of the OSD and deploy a new one. Users should be careful when cancelling a ceph orch osd rm operation, as the crush weight of the OSD will not be returned to its original value before the removal process begins. Bugzilla:2247211 Repeated use of the Ceph Object Gateway realm bootstrap command causes setting the zonegroup hostname to fail Setting the zonegroup hostnames using the Ceph Object Gateway realm bootstrap command fails when being done multiple times. Due to this, the repeated use of the Ceph Object Gateway realm bootstrap command to recreate a realm/zonegroup/zone does not work properly with zonegroup_hostnames field and the hostnames will not be set in the zonegroup. As a workaround, set the zonegroup hostnames manually using the radosgw-admin tool. Bugzilla:2241321 5.2. Ceph Object Gateway Processing a query on a large Parquet object causes Ceph Object gateway processes to stop Previously, in some cases, upon processing a query on a Parquet object, that object would be read chunk after chunk and these chunks could be quite big. This would cause the Ceph Object Gateway to load a large buffer into memory that is too big for a low-end machine; especially, when Ceph Object Gateway is co-located with OSD processes, which consumes a large amount of memory. This situation would trigger the OS to kill the Ceph Object Gateway process. As a workaround, place the Ceph Object Gateway on a separate node and as a result, more memory is left for Ceph Object gateway, enabling it to complete processing successfully. Bugzilla:2275323 Current RGW STS implementation does not support encryption keys larger than 1024 bytes The current RGW STS implementation does not support encryption keys larger than 1024 bytes. As a workaround, in Keycloak: realm settings - keys , edit the 'rsa-enc-generated' provider to have priority 90 rather than 100 and keySize as 1024 instead of 2048. Bugzilla:2276931 Intel QAT Acceleration for Object Compression & Encryption Intel QuickAssist Technology (QAT) is implemented to help reduce node CPU usage and improve the performance of Ceph Object Gateway when enabling compression and encryption. In this release, QAT can only be configured on new setups (Greenfield), which is a limitation of this feature. QAT Ceph Object Gateway daemons cannot be configured in the same cluster as non-QAT (regular) Ceph Object Gateway daemons. Bugzilla:2284394
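For the OSD drain issue described above, the workaround amounts to restoring the CRUSH weight by hand. The following is a minimal, hedged sketch; osd.<id> and the weight value are placeholders that must come from your own cluster records, noted before the removal was started.

# Sketch: manually restore an OSD's CRUSH weight after cancelling 'ceph orch osd rm'.

# Show current CRUSH weights (note the WEIGHT column for the affected OSD).
ceph osd tree

# Restore the original weight for the OSD whose drain was cancelled.
# <id> and <original_weight> are placeholders.
ceph osd crush reweight osd.<id> <original_weight>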
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/7.1_release_notes/known-issues
Network Functions Virtualization Planning and Configuration Guide
Network Functions Virtualization Planning and Configuration Guide Red Hat OpenStack Platform 16.2 Planning and Configuring the Network Functions Virtualization (NFV) OpenStack Deployment OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/index
Chapter 3. Enhancements
Chapter 3. Enhancements This section describes the major enhancements introduced in Red Hat OpenShift Data Foundation 4.13. 3.1. Disable Multicloud Object Gateway external service during deployment With this release, there is an option to deploy OpenShift Data Foundation without the Multicloud Object Gateway load balancer service by using the command line interface (CLI). You need to use the disableLoadBalancerService variable in the storagecluster CRD. This provides enhanced security and does not expose services externally to the cluster. For more information, see the knowledgebase article Install Red Hat OpenShift Data Foundation (previously known as OpenShift Container Storage) 4.X in internal mode using command line interface and Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . 3.2. Network File System metrics for enhanced observability The Network File System (NFS) metrics dashboard provides observability for NFS mounts, such as the following: Mount point for any exported NFS shares Number of client mounts Breakdown statistics of the connected clients to help determine internal versus external client mounts Grace period status of the Ganesha server Health statuses of the Ganesha server For more information, see Network File System metrics . 3.3. Metrics to improve reporting of unhealthy blocklisted nodes With this enhancement, alerts are displayed in the OpenShift Web Console to inform about a blocklisted kernel RBD client on a worker node. This helps to reduce potential operational issues and troubleshooting time. 3.4. Enable Ceph exporter with labeled performance counters in Rook With this enhancement, the Ceph exporter is enabled in Rook and provided with labeled performance counters for rbd-mirror metrics, thereby enhancing scalability for a larger number of images. 3.5. New Amazon Web Services (AWS) regions for Multicloud Object Gateway backing store With this enhancement, the new regions that were recently added to AWS are included in the list of regions for the Multicloud Object Gateway backing store. As a result, it is now possible to deploy the default backing store in the new regions. 3.6. Allow RBD pool name with an underscore or period Previously, creating a storage system in OpenShift Data Foundation using an external Ceph cluster would fail if the RADOS block device (RBD) pool name contained an underscore (_) or a period (.). With this fix, the Python script ( ceph-external-cluster-details-exporter.py ) is enhanced so that an alias for the RBD pool names can be passed in. This alias allows OpenShift Data Foundation to adopt an external Ceph cluster with RBD pool names containing an underscore (_) or a period (.). 3.7. OSD replicas are set to match the number of failure domains Previously, an unbalanced situation used to occur when the number of replicas did not match the number of failure domains. With this enhancement, OSD replicas are set to match the number of failure domains, thereby avoiding the imbalance. For example, when a cluster is deployed across 4 zones with 4 nodes, 4 OSD replicas are created. 3.8. Change in default permission and FSGroupPolicy Permissions of newly created volumes now default to a more secure 755 instead of 777. FSGroupPolicy is now set to File (instead of ReadWriteOnceWithFSType in ODF 4.11) to allow application access to volumes based on FSGroup.
This involves Kubernetes using fsGroup to change permissions and ownership of the volume to match user requested fsGroup in the pod's SecurityPolicy. Note Existing volumes with a huge number of files may take a long time to mount since changing permissions and ownership takes a lot of time. For more information, see this knowledgebase solution .
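Section 3.1 mentions the disableLoadBalancerService variable in the storagecluster CRD but does not show it in context. The sketch below illustrates one way it could be set from the CLI; the exact field path (spec.multiCloudGateway.disableLoadBalancerService) and the storage cluster name (ocs-storagecluster) are assumptions here, so confirm them against the linked knowledgebase article before use.

# Hedged sketch: disable the Multicloud Object Gateway load balancer service.
# The field path and resource name below are assumptions; verify them in the
# knowledgebase article referenced in section 3.1 before applying.
oc patch storagecluster ocs-storagecluster -n openshift-storage \
  --type merge \
  -p '{"spec":{"multiCloudGateway":{"disableLoadBalancerService":true}}}'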
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/enhancements
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/making-open-source-more-inclusive
Chapter 21. Interceptors
Chapter 21. Interceptors JBoss EAP messaging supports interceptors to intercept packets entering and exiting the server. Incoming and outgoing interceptors are called for every packet entering or exiting the server, respectively. This allows custom code to be executed, such as for auditing or filtering packets. Interceptors can modify the packets they intercept. This makes interceptors powerful, but also potentially dangerous. 21.1. Implementing Interceptors An interceptor must implement the Interceptor interface: package org.apache.artemis.activemq.api.core.interceptor; public interface Interceptor { boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException; } The returned boolean value is important: if true is returned, the process continues normally; if false is returned, the process is aborted, no other interceptors are called, and the packet is not processed further by the server. Interceptor classes should be added to JBoss EAP as a module. See Create a Custom Module in the JBoss EAP Configuration Guide for more information. 21.2. Configuring Interceptors After you add the interceptor module to JBoss EAP as a custom module, add both incoming and outgoing interceptors to the messaging subsystem configuration by using the management CLI. Note You must start JBoss EAP in administrator-only mode before the new interceptor configuration is accepted. See Running JBoss EAP in Admin-only Mode in the JBoss EAP Configuration Guide for details. Restart the server in normal mode after the new configuration is processed. Each interceptor should be added according to the example management CLI command below. The examples assume each interceptor has already been added to JBoss EAP as a custom module. Adding an outgoing interceptor follows a similar syntax, as the example below illustrates.
[ "package org.apache.artemis.activemq.api.core.interceptor; public interface Interceptor { boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException; }", "/subsystem=messaging-activemq/server=default:list-add(name=incoming-interceptors, value={name => \"foo.bar.MyIncomingInterceptor\", module=>\"foo.bar.interceptors\"})", "/subsystem=messaging-activemq/server=default:list-add(name=outgoing-interceptors, value={name => \"foo.bar.MyOutgoingInterceptor\", module=>\"foo.bar.interceptors\"})" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/configuring-artemis-interceptors
Chapter 5. Storage classes and storage pools
Chapter 5. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behaviour. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 5.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForConsumer as the default option. If you choose the Immediate option, then the PV is created at the same time while creating the PVC. Select RBD Provisioner which is the plugin used for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 5.2. Creating a storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault. Use the following procedure to create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. Persistent volume encryption is only available for RBD PVs. You can configure access to the KMS in two different ways: Using vaulttokens : allows users to authenticate using a token Using vaulttenantsa (technology preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . See the relevant prerequisites section for your use case before following the procedure for creating the storage class: Section 5.2.1, "Prerequisites for using vaulttokens " Section 5.2.2, "Prerequisites for using vaulttenantsa " 5.2.1. Prerequisites for using vaulttokens The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. For more information, see Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create a secret in the tenant's namespace as follows: On the OpenShift Container Platform web console, navigate to Workloads Secrets . Click Create Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. , follow the steps in Section 5.2.3, "Procedure for creating a storage class for PV encryption" . 5.2.2. Prerequisites for using vaulttenantsa The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. For more information, see Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: The Kubernetes authentication method must be configured before OpenShift Data Foundation can authenticate with and start using Vault. The instructions below create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault. Apply the following YAML to your Openshift cluster: Identify the secret name associated with the serviceaccount (SA) created above: Get the token and the CA certificate from the secret: Retrieve the OCP cluster endpoint: Use the information collected in the steps above to setup the kubernetes authentication method in Vault as shown below: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the Openshift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . In order to create a storageclass that uses the vaulttenantsa method for PV encrytpion, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. 
The sample yaml given below can be used to update or create the csi-kms-connection-detail ConfigMap: encryptionKMSType : should be set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress : The hostname or IP address of the vault server with the port number. vaultTLSServerName : (Optional) The vault TLS server name vaultAuthPath : (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace : (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace : (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath : The backend path in Vault where the encryption keys will be stored vaultCAFromSecret : The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret : The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret : The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName : (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. , follow the steps in Section 5.2.3, "Procedure for creating a storage class for PV encryption" . 5.2.3. Procedure for creating a storage class for PV encryption After performing the required prerequisites for either vaulttokens or vaulttenantsa , perform the steps below to create a storageclass with encryption enabled. Navigate to Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the the connection details available in the csi-kms-connection-details ConfigMap. Create new KMS connection : This is applicable for vaulttokens only. Key Management Service Provider is set to Vault by default. Enter a unique Vault Service Name , host Address of the Vault server ( https://<hostname or ip> ), and Port number. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration. Enter the key value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional : Enter TLS Server Name and Vault Enterprise Namespace . Provide CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Click Save . Click Create . Edit the ConfigMap to add the VAULT_BACKEND or vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. 
Note VAULT_BACKEND or vaultBackend are optional parameters that has added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the VAULT_BACKEND or vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID. You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.2.3.1. Overriding Vault connection details using tenant ConfigMap The Vault connections details can be reconfigured per tenant by creating a ConfigMap in the Openshift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overidden for the given tenant namespace can be specified under the data section as shown below: Once the yaml is edited, click on Create .
[ "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF", "apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io", "oc -n openshift-storage get sa rbd-csi-vault-token-review -o jsonpath=\"{.secrets[*]['name']}\"", "oc get secret <secret associated with SA> -o jsonpath=\"{.data['token']}\" | base64 --decode; echo oc get secret <secret associated with SA> -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo", "oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\"", "vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=<SA token> kubernetes_host=<OCP cluster endpoint> kubernetes_ca_cert=<SA CA certificate>", "vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>", "apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details", "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"KMS_PROVIDER\": \"vaulttokens\", \"KMS_SERVICE_NAME\": \"1-vault\", [...] \"VAULT_BACKEND\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv-v2\" }", "--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/storage-classes-and-storage-pools_rhodf
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The Red Hat build of Apache Qpid JMS examples require a running message broker with a queue named queue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named queue . USD <broker-instance-dir> /bin/artemis queue create --name queue --address queue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2023-12-06 12:44:04 UTC
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name queue --address queue --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/using_the_broker_with_the_examples
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/making-open-source-more-inclusive
Chapter 20. SSH Protocol
Chapter 20. SSH Protocol SSH TM (or S ecure SH ell) is a protocol which facilitates secure communications between two systems using a client/server architecture and allows users to log into server host systems remotely. Unlike other remote communication protocols, such as FTP or Telnet, SSH encrypts the login session, making it impossible for intruders to collect unencrypted passwords. SSH is designed to replace older, less secure terminal applications used to log into remote hosts, such as telnet or rsh . A related program called scp replaces older programs designed to copy files between hosts, such as rcp . Because these older applications do not encrypt passwords transmitted between the client and the server, avoid them whenever possible. Using secure methods to log into remote systems decreases the risks for both the client system and the remote host. 20.1. Features of SSH The SSH protocol provides the following safeguards: After an initial connection, the client can verify that it is connecting to the same server it had connected to previously. The client transmits its authentication information to the server using strong, 128-bit encryption. All data sent and received during a session is transferred using 128-bit encryption, making intercepted transmissions extremely difficult to decrypt and read. The client can forward X11 [6] applications from the server. This technique, called X11 forwarding , provides a secure means to use graphical applications over a network. Because the SSH protocol encrypts everything it sends and receives, it can be used to secure otherwise insecure protocols. Using a technique called port forwarding , an SSH server can become a conduit to securing otherwise insecure protocols, like POP, and increasing overall system and data security. Red Hat Enterprise Linux includes the general OpenSSH package ( openssh ) as well as the OpenSSH server ( openssh-server ) and client ( openssh-clients ) packages. Refer to the chapter titled OpenSSH in the System Administrators Guide for instructions on installing and deploying OpenSSH. Note, the OpenSSH packages require the OpenSSL package ( openssl ) which installs several important cryptographic libraries, enabling OpenSSH to provide encrypted communications. 20.1.1. Why Use SSH? Nefarious computer users have a variety of tools at their disposal enabling them to disrupt, intercept, and re-route network traffic in an effort to gain access to a system. In general terms, these threats can be categorized as follows: Interception of communication between two systems - In this scenario, the attacker can be somewhere on the network between the communicating entities, copying any information passed between them. The attacker may intercept and keep the information, or alter the information and send it on to the intended recipient. This attack can be mounted through the use of a packet sniffer - a common network utility. Impersonation of a particular host - Using this strategy, an attacker's system is configured to pose as the intended recipient of a transmission. If this strategy works, the user's system remains unaware that it is communicating with the wrong host. This attack can be mounted through techniques known as DNS poisoning [7] or IP spoofing [8] . Both techniques intercept potentially sensitive information and, if the interception is made for hostile reasons, the results can be disastrous. If SSH is used for remote shell login and file copying, these security threats can be greatly diminished. 
This is because the SSH client and server use digital signatures to verify their identity. Additionally, all communication between the client and server systems is encrypted. Attempts to spoof the identity of either side of a communication does not work, since each packet is encrypted using a key known only by the local and remote systems. [6] X11 refers to the X11R6.7 windowing display system, traditionally referred to as the X Window System or X. Red Hat Enterprise Linux includes XFree86, an open source X Window System. [7] DNS poisoning occurs when an intruder cracks a DNS server, pointing client systems to a maliciously duplicated host. [8] IP spoofing occurs when an intruder sends network packets which falsely appear to be from a trusted host on the network.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-ssh
14.11.4. Creating an XML Dump File for a Storage Pool
14.11.4. Creating an XML Dump File for a Storage Pool The pool-dumpxml --inactive pool-or-uuid command returns the XML information about the specified storage pool object. Using --inactive dumps the configuration that will be used on the next start of the pool, as opposed to the current pool configuration.
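For example, the command below (a minimal illustrative sketch using a placeholder pool name of default) writes the inactive configuration of a pool to a file:

# Dump the configuration that will be used on the next start of the pool,
# rather than its current runtime configuration. "default" is a placeholder pool name.
virsh pool-dumpxml --inactive default > default-pool.xml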
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_pool_commands-creating_an_xml_dump_file_for_a_pool
Chapter 2. Updating the undercloud
Chapter 2. Updating the undercloud You can use director to update the main packages on the undercloud node. To update the undercloud and its overcloud images to the latest Red Hat OpenStack Platform (RHOSP) 16.2 version, complete the following procedures: Section 2.1, "Performing a minor update of a containerized undercloud" Section 2.2, "Updating the overcloud images" Prerequisites Before you can update the undercloud to the latest RHOSP 16.2 version, ensure that you complete all the update preparation procedures. For more information, see Chapter 1, Preparing for a minor update . 2.1. Performing a minor update of a containerized undercloud Director provides commands to update the main packages on the undercloud node. Use director to perform a minor update within the current version of your RHOSP environment. Procedure On the undercloud node, log in as the stack user. Source the stackrc file: Update the director main packages with the dnf update command: USD sudo dnf update -y python3-tripleoclient* tripleo-ansible ansible Update the undercloud environment with the openstack undercloud upgrade command : Wait until the undercloud update process completes. Reboot the undercloud to update the operating system's kernel and other system packages: Wait until the node boots. 2.2. Updating the overcloud images You must replace your current overcloud images with new versions to ensure that director can introspect and provision your nodes with the latest version of the RHOSP software. If you are using pre-provisioned nodes, this step is not required. Prerequisites You have updated the undercloud node to the latest version. For more information, see Section 2.1, "Performing a minor update of a containerized undercloud" . Procedure Source the stackrc file: Remove any existing images from the images directory on the stack user's home ( /home/stack/images ): Extract the archives: Import the latest images into the director: USD openstack overcloud image upload --update-existing --image-path /home/stack/images/ Configure your nodes to use the new images: USD openstack overcloud node configure USD(openstack baremetal node list -c UUID -f value) Verify the existence of the new images: USD openstack image list USD ls -l /var/lib/ironic/httpboot Important When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use only the RHOSP 16.2 images with the RHOSP 16.2 heat templates. If you deployed a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, the overcloud image and package repository versions might be out of sync. To ensure that the overcloud image and package repository versions match, you can use the virt-customize tool. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize . The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future.
[ "source ~/stackrc", "sudo dnf update -y python3-tripleoclient* tripleo-ansible ansible", "openstack undercloud upgrade", "sudo reboot", "source ~/stackrc", "rm -rf ~/images/*", "cd ~/images for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.2.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.2.tar; do tar -xvf USDi; done cd ~", "openstack overcloud image upload --update-existing --image-path /home/stack/images/", "openstack overcloud node configure USD(openstack baremetal node list -c UUID -f value)", "openstack image list ls -l /var/lib/ironic/httpboot" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/keeping_red_hat_openstack_platform_updated/assembly_updating-the-undercloud_keeping-updated
Chapter 2. Upgrading the Red Hat Quay Operator Overview
Chapter 2. Upgrading the Red Hat Quay Operator Overview The Red Hat Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Red Hat Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Red Hat Quay to deploy ; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Red Hat Quay on Kubernetes. 2.1. Operator Lifecycle Manager The Red Hat Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM) . When creating a Subscription with the default approvalStrategy: Automatic , OLM will automatically upgrade the Red Hat Quay Operator whenever a new version becomes available. Warning When the Red Hat Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the OperatorHub page for the Red Hat Quay Operator during installation. It can also be found in the Red Hat Quay Operator Subscription object in the approvalStrategy field. Choosing Automatic means that your Red Hat Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected. 2.2. Upgrading the Red Hat Quay Operator The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators . In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 → 3.1.3, 3.1.3 → 3.2.2, 3.2.2 → 3.3.4, 3.3.4 → 3.4.z, 3.4.z → 3.5.z. This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3.12: 3.10.z → 3.12.z and 3.11.z → 3.12.z. For users on standalone deployments of Red Hat Quay wanting to upgrade to 3.12, see the Standalone upgrade guide. 2.2.1. Upgrading Red Hat Quay to version 3.12 To update Red Hat Quay from one minor version to the next, for example, 3.11 → 3.12, you must change the update channel for the Red Hat Quay Operator. Procedure In the OpenShift Container Platform Web Console, navigate to Operators Installed Operators . Click on the Red Hat Quay Operator. Navigate to the Subscription tab. Under Subscription details click Update channel . Select stable-3.12 Save . Check the progress of the new installation under Upgrade status . Wait until the upgrade status changes to 1 installed before proceeding. In your OpenShift Container Platform cluster, navigate to Workloads Pods . Existing pods should be terminated, or in the process of being terminated. Wait for the following pods, which are responsible for upgrading the database and performing the alembic migration of existing data, to spin up: clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade .
After the clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade pods are marked as Completed , the remaining pods for your Red Hat Quay deployment spin up. This takes approximately ten minutes. Verify that the quay-database and clair-postgres pods now use the postgresql-13 image. After the quay-app pod is marked as Running , you can reach your Red Hat Quay registry. 2.2.2. Upgrading to the minor release version For z stream upgrades, for example, 3.11.1 → 3.11.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic , the Red Hat Quay Operator upgrades automatically to the newest z stream. This results in automatic, rolling Red Hat Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin. 2.2.3. Changing the update channel for the Red Hat Quay Operator The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Red Hat Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Red Hat Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators. 2.2.4. Manually approving a pending Operator upgrade If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Red Hat Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Red Hat Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade. The Subscription tab in the UI shows the update Channel , the Approval strategy, the Upgrade status , and the InstallPlan , and the list of Installed Operators provides a high-level summary of the current Quay installation. 2.3. Upgrading a QuayRegistry resource When the Red Hat Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used: If status.currentVersion is unset, reconcile as normal. If status.currentVersion equals the Operator version, reconcile as normal. If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone. 2.4. Upgrading a QuayEcosystem Upgrades are supported from versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated.
A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry , use the following procedure. Procedure Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem . $ oc edit quayecosystem <quayecosystemname> metadata: labels: quay-operator/migrate: "true" Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem . The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true" . After the status.registryEndpoint of the new QuayRegistry is set, access Red Hat Quay and confirm that all data and settings were migrated successfully. If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources. 2.4.1. Reverting QuayEcosystem Upgrade If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry , follow these steps to revert back to using the QuayEcosystem : Procedure Delete the QuayRegistry using either the UI or kubectl : $ kubectl delete -n <namespace> quayregistry <quayecosystem-name> If external access was provided using a Route , change the Route to point back to the original Service using the UI or kubectl . Note If your QuayEcosystem was managing the PostgreSQL database, the upgrade process will migrate your data to a new PostgreSQL database managed by the upgraded Operator. Your old database will not be changed or removed, but Red Hat Quay will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component. 2.4.2. Supported QuayEcosystem Configurations for Upgrades The Red Hat Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Red Hat Quay's config.yaml file. Database Ephemeral database not supported ( volumeSize field must be set). Redis Nothing special needed. External Access Only passthrough Route access is supported for automatic migration. Manual migration required for other methods. LoadBalancer without custom hostname: After the QuayEcosystem is marked with the label "quay-operator/migration-complete": "true" , delete the metadata.ownerReferences field from the existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app . Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service ; the Quay Operator will not manage it (see the command sketch after this list). LoadBalancer / NodePort / Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app . Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service . Clair Nothing special needed.
Object Storage QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported. Repository Mirroring Nothing special needed.
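The following commands are a minimal sketch of the manual Service steps described above for a LoadBalancer without a custom hostname. The quay-enterprise namespace, the old Service name example-quayecosystem-quay, and the QuayEcosystem name example-quayecosystem are assumptions; substitute the names from your own deployment.
$ oc patch service example-quayecosystem-quay -n quay-enterprise --type=json -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'
# Detaches the old Service so deleting the QuayEcosystem does not garbage collect the load balancer.
$ oc get service example-quayecosystem-quay-app -n quay-enterprise -o jsonpath='{.spec.selector}'
# Shows the selector of the new Service created by the Operator.
$ oc edit service example-quayecosystem-quay -n quay-enterprise
# Set spec.selector on the old Service to match the selector printed above.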
[ "oc edit quayecosystem <quayecosystemname>", "metadata: labels: quay-operator/migrate: \"true\"", "kubectl delete -n <namespace> quayregistry <quayecosystem-name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/upgrade_red_hat_quay/operator-upgrade
Chapter 2. Installing the Red Hat JBoss Core Services 2.4.57
Chapter 2. Installing the Red Hat JBoss Core Services 2.4.57 You can install the Apache HTTP Server 2.4.57 on Red Hat Enterprise Linux or Windows Server. For more information, see the following sections of the installation guide: Installing the JBCS Apache HTTP Server on RHEL from archive files Installing the JBCS Apache HTTP Server on RHEL from RPM packages Installing the JBCS Apache HTTP Server on Windows Server
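As a quick orientation before following the linked guides, the commands below sketch the two Red Hat Enterprise Linux installation methods. The archive file name, extraction path, repository ID, and package group shown here are assumptions for illustration only; use the exact names from the installation guide for your platform and version.
# Archive installation (assumed archive name and target directory):
$ unzip jbcs-httpd24-httpd-2.4.57-RHEL8-x86_64.zip -d /opt/
$ /opt/jbcs-httpd24-2.4/httpd/sbin/apachectl start
# RPM installation (assumed repository ID and group name):
$ subscription-manager repos --enable=jb-coreservices-1-for-rhel-8-x86_64-rpms
$ yum groupinstall jbcs-httpd24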
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_6_release_notes/installing_the_red_hat_jboss_core_services_2_4_57
Chapter 21. MachineConfiguration [operator.openshift.io/v1]
Chapter 21. MachineConfiguration [operator.openshift.io/v1] Description MachineConfiguration provides information to configure an operator to manage Machine Configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 21.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Machine Config Operator status object status is the most recently observed status of the Machine Config Operator 21.1.1. .spec Description spec is the specification of the desired behavior of the Machine Config Operator Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managedBootImages object managedBootImages allows configuration for the management of boot images for machine resources within the cluster. This configuration allows users to select resources that should be updated to the latest boot images during cluster upgrades, ensuring that new machines always boot with the current cluster version's boot image. When omitted, no boot images will be updated. managementState string managementState indicates whether and how the operator should manage the component nodeDisruptionPolicy object nodeDisruptionPolicy allows an admin to set granular node disruption actions for MachineConfig-based updates, such as drains, service reloads, etc. Specifying this will allow for less downtime when doing small configuration updates to the cluster. This configuration has no effect on cluster upgrades which will still incur node disruption where required. observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. 
It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 21.1.2. .spec.managedBootImages Description managedBootImages allows configuration for the management of boot images for machine resources within the cluster. This configuration allows users to select resources that should be updated to the latest boot images during cluster upgrades, ensuring that new machines always boot with the current cluster version's boot image. When omitted, no boot images will be updated. Type object Property Type Description machineManagers array machineManagers can be used to register machine management resources for boot image updates. The Machine Config Operator will watch for changes to this list. Only one entry is permitted per type of machine management resource. machineManagers[] object MachineManager describes a target machine resource that is registered for boot image updates. It stores identifying information such as the resource type and the API Group of the resource. It also provides granular control via the selection field. 21.1.3. .spec.managedBootImages.machineManagers Description machineManagers can be used to register machine management resources for boot image updates. The Machine Config Operator will watch for changes to this list. Only one entry is permitted per type of machine management resource. Type array 21.1.4. .spec.managedBootImages.machineManagers[] Description MachineManager describes a target machine resource that is registered for boot image updates. It stores identifying information such as the resource type and the API Group of the resource. It also provides granular control via the selection field. Type object Required apiGroup resource selection Property Type Description apiGroup string apiGroup is name of the APIGroup that the machine management resource belongs to. The only current valid value is machine.openshift.io. machine.openshift.io means that the machine manager will only register resources that belong to OpenShift machine API group. resource string resource is the machine management resource's type. The only current valid value is machinesets. machinesets means that the machine manager will only register resources of the kind MachineSet. selection object selection allows granular control of the machine management resources that will be registered for boot image updates. 21.1.5. .spec.managedBootImages.machineManagers[].selection Description selection allows granular control of the machine management resources that will be registered for boot image updates. Type object Required mode Property Type Description mode string mode determines how machine managers will be selected for updates. Valid values are All and Partial. 
All means that every resource matched by the machine manager will be updated. Partial requires specified selector(s) and allows customisation of which resources matched by the machine manager will be updated. partial object partial provides label selector(s) that can be used to match machine management resources. Only permitted when mode is set to "Partial". 21.1.6. .spec.managedBootImages.machineManagers[].selection.partial Description partial provides label selector(s) that can be used to match machine management resources. Only permitted when mode is set to "Partial". Type object Required machineResourceSelector Property Type Description machineResourceSelector object machineResourceSelector is a label selector that can be used to select machine resources like MachineSets. 21.1.7. .spec.managedBootImages.machineManagers[].selection.partial.machineResourceSelector Description machineResourceSelector is a label selector that can be used to select machine resources like MachineSets. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 21.1.8. .spec.managedBootImages.machineManagers[].selection.partial.machineResourceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 21.1.9. .spec.managedBootImages.machineManagers[].selection.partial.machineResourceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 21.1.10. .spec.nodeDisruptionPolicy Description nodeDisruptionPolicy allows an admin to set granular node disruption actions for MachineConfig-based updates, such as drains, service reloads, etc. Specifying this will allow for less downtime when doing small configuration updates to the cluster. This configuration has no effect on cluster upgrades which will still incur node disruption where required. Type object Property Type Description files array files is a list of MachineConfig file definitions and actions to take to changes on those paths This list supports a maximum of 50 entries. 
files[] object NodeDisruptionPolicySpecFile is a file entry and corresponding actions to take and is used in the NodeDisruptionPolicyConfig object sshkey object sshkey maps to the ignition.sshkeys field in the MachineConfig object, definition an action for this will apply to all sshkey changes in the cluster units array units is a list MachineConfig unit definitions and actions to take on changes to those services This list supports a maximum of 50 entries. units[] object NodeDisruptionPolicySpecUnit is a systemd unit name and corresponding actions to take and is used in the NodeDisruptionPolicyConfig object 21.1.11. .spec.nodeDisruptionPolicy.files Description files is a list of MachineConfig file definitions and actions to take to changes on those paths This list supports a maximum of 50 entries. Type array 21.1.12. .spec.nodeDisruptionPolicy.files[] Description NodeDisruptionPolicySpecFile is a file entry and corresponding actions to take and is used in the NodeDisruptionPolicyConfig object Type object Required actions path Property Type Description actions array actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. actions[] object path string path is the location of a file being managed through a MachineConfig. The Actions in the policy will apply to changes to the file at this path. 21.1.13. .spec.nodeDisruptionPolicy.files[].actions Description actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. Type array 21.1.14. .spec.nodeDisruptionPolicy.files[].actions[] Description Type object Required type Property Type Description reload object reload specifies the service to reload, only valid if type is reload restart object restart specifies the service to restart, only valid if type is restart type string type represents the commands that will be carried out if this NodeDisruptionPolicySpecActionType is executed Valid values are Reboot, Drain, Reload, Restart, DaemonReload and None. reload/restart requires a corresponding service target specified in the reload/restart field. Other values require no further configuration 21.1.15. .spec.nodeDisruptionPolicy.files[].actions[].reload Description reload specifies the service to reload, only valid if type is reload Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be reloaded Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". 
USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.16. .spec.nodeDisruptionPolicy.files[].actions[].restart Description restart specifies the service to restart, only valid if type is restart Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be restarted Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.17. .spec.nodeDisruptionPolicy.sshkey Description sshkey maps to the ignition.sshkeys field in the MachineConfig object, definition an action for this will apply to all sshkey changes in the cluster Type object Required actions Property Type Description actions array actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. actions[] object 21.1.18. .spec.nodeDisruptionPolicy.sshkey.actions Description actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. Type array 21.1.19. .spec.nodeDisruptionPolicy.sshkey.actions[] Description Type object Required type Property Type Description reload object reload specifies the service to reload, only valid if type is reload restart object restart specifies the service to restart, only valid if type is restart type string type represents the commands that will be carried out if this NodeDisruptionPolicySpecActionType is executed Valid values are Reboot, Drain, Reload, Restart, DaemonReload and None. reload/restart requires a corresponding service target specified in the reload/restart field. Other values require no further configuration 21.1.20. .spec.nodeDisruptionPolicy.sshkey.actions[].reload Description reload specifies the service to reload, only valid if type is reload Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be reloaded Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". 
USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.21. .spec.nodeDisruptionPolicy.sshkey.actions[].restart Description restart specifies the service to restart, only valid if type is restart Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be restarted Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.22. .spec.nodeDisruptionPolicy.units Description units is a list MachineConfig unit definitions and actions to take on changes to those services This list supports a maximum of 50 entries. Type array 21.1.23. .spec.nodeDisruptionPolicy.units[] Description NodeDisruptionPolicySpecUnit is a systemd unit name and corresponding actions to take and is used in the NodeDisruptionPolicyConfig object Type object Required actions name Property Type Description actions array actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. actions[] object name string name represents the service name of a systemd service managed through a MachineConfig Actions specified will be applied for changes to the named service. Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.24. .spec.nodeDisruptionPolicy.units[].actions Description actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. Type array 21.1.25. .spec.nodeDisruptionPolicy.units[].actions[] Description Type object Required type Property Type Description reload object reload specifies the service to reload, only valid if type is reload restart object restart specifies the service to restart, only valid if type is restart type string type represents the commands that will be carried out if this NodeDisruptionPolicySpecActionType is executed Valid values are Reboot, Drain, Reload, Restart, DaemonReload and None. 
reload/restart requires a corresponding service target specified in the reload/restart field. Other values require no further configuration 21.1.26. .spec.nodeDisruptionPolicy.units[].actions[].reload Description reload specifies the service to reload, only valid if type is reload Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be reloaded Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.27. .spec.nodeDisruptionPolicy.units[].actions[].restart Description restart specifies the service to restart, only valid if type is restart Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be restarted Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.28. .status Description status is the most recently observed status of the Machine Config Operator Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object Condition contains details for one aspect of the current state of this API Resource. nodeDisruptionPolicyStatus object nodeDisruptionPolicyStatus status reflects what the latest cluster-validated policies are, and will be used by the Machine Config Daemon during future node updates. observedGeneration integer observedGeneration is the last generation change you've dealt with 21.1.29. .status.conditions Description conditions is a list of conditions and their status Type array 21.1.30. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. 
type string type of condition in CamelCase or in foo.example.com/CamelCase. 21.1.31. .status.nodeDisruptionPolicyStatus Description nodeDisruptionPolicyStatus status reflects what the latest cluster-validated policies are, and will be used by the Machine Config Daemon during future node updates. Type object Property Type Description clusterPolicies object clusterPolicies is a merge of cluster default and user provided node disruption policies. 21.1.32. .status.nodeDisruptionPolicyStatus.clusterPolicies Description clusterPolicies is a merge of cluster default and user provided node disruption policies. Type object Property Type Description files array files is a list of MachineConfig file definitions and actions to take to changes on those paths files[] object NodeDisruptionPolicyStatusFile is a file entry and corresponding actions to take and is used in the NodeDisruptionPolicyClusterStatus object sshkey object sshkey is the overall sshkey MachineConfig definition units array units is a list MachineConfig unit definitions and actions to take on changes to those services units[] object NodeDisruptionPolicyStatusUnit is a systemd unit name and corresponding actions to take and is used in the NodeDisruptionPolicyClusterStatus object 21.1.33. .status.nodeDisruptionPolicyStatus.clusterPolicies.files Description files is a list of MachineConfig file definitions and actions to take to changes on those paths Type array 21.1.34. .status.nodeDisruptionPolicyStatus.clusterPolicies.files[] Description NodeDisruptionPolicyStatusFile is a file entry and corresponding actions to take and is used in the NodeDisruptionPolicyClusterStatus object Type object Required actions path Property Type Description actions array actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. actions[] object path string path is the location of a file being managed through a MachineConfig. The Actions in the policy will apply to changes to the file at this path. 21.1.35. .status.nodeDisruptionPolicyStatus.clusterPolicies.files[].actions Description actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. Type array 21.1.36. 
.status.nodeDisruptionPolicyStatus.clusterPolicies.files[].actions[] Description Type object Required type Property Type Description reload object reload specifies the service to reload, only valid if type is reload restart object restart specifies the service to restart, only valid if type is restart type string type represents the commands that will be carried out if this NodeDisruptionPolicyStatusActionType is executed Valid values are Reboot, Drain, Reload, Restart, DaemonReload, None and Special. reload/restart requires a corresponding service target specified in the reload/restart field. Other values require no further configuration 21.1.37. .status.nodeDisruptionPolicyStatus.clusterPolicies.files[].actions[].reload Description reload specifies the service to reload, only valid if type is reload Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be reloaded Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.38. .status.nodeDisruptionPolicyStatus.clusterPolicies.files[].actions[].restart Description restart specifies the service to restart, only valid if type is restart Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be restarted Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.39. .status.nodeDisruptionPolicyStatus.clusterPolicies.sshkey Description sshkey is the overall sshkey MachineConfig definition Type object Required actions Property Type Description actions array actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. actions[] object 21.1.40. .status.nodeDisruptionPolicyStatus.clusterPolicies.sshkey.actions Description actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. Type array 21.1.41. 
.status.nodeDisruptionPolicyStatus.clusterPolicies.sshkey.actions[] Description Type object Required type Property Type Description reload object reload specifies the service to reload, only valid if type is reload restart object restart specifies the service to restart, only valid if type is restart type string type represents the commands that will be carried out if this NodeDisruptionPolicyStatusActionType is executed Valid values are Reboot, Drain, Reload, Restart, DaemonReload, None and Special. reload/restart requires a corresponding service target specified in the reload/restart field. Other values require no further configuration 21.1.42. .status.nodeDisruptionPolicyStatus.clusterPolicies.sshkey.actions[].reload Description reload specifies the service to reload, only valid if type is reload Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be reloaded Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.43. .status.nodeDisruptionPolicyStatus.clusterPolicies.sshkey.actions[].restart Description restart specifies the service to restart, only valid if type is restart Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be restarted Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.44. .status.nodeDisruptionPolicyStatus.clusterPolicies.units Description units is a list MachineConfig unit definitions and actions to take on changes to those services Type array 21.1.45. .status.nodeDisruptionPolicyStatus.clusterPolicies.units[] Description NodeDisruptionPolicyStatusUnit is a systemd unit name and corresponding actions to take and is used in the NodeDisruptionPolicyClusterStatus object Type object Required actions name Property Type Description actions array actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. actions[] object name string name represents the service name of a systemd service managed through a MachineConfig Actions specified will be applied for changes to the named service. Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". 
USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.46. .status.nodeDisruptionPolicyStatus.clusterPolicies.units[].actions Description actions represents the series of commands to be executed on changes to the file at the corresponding file path. Actions will be applied in the order that they are set in this list. If there are other incoming changes to other MachineConfig entries in the same update that require a reboot, the reboot will supercede these actions. Valid actions are Reboot, Drain, Reload, DaemonReload and None. The Reboot action and the None action cannot be used in conjunction with any of the other actions. This list supports a maximum of 10 entries. Type array 21.1.47. .status.nodeDisruptionPolicyStatus.clusterPolicies.units[].actions[] Description Type object Required type Property Type Description reload object reload specifies the service to reload, only valid if type is reload restart object restart specifies the service to restart, only valid if type is restart type string type represents the commands that will be carried out if this NodeDisruptionPolicyStatusActionType is executed Valid values are Reboot, Drain, Reload, Restart, DaemonReload, None and Special. reload/restart requires a corresponding service target specified in the reload/restart field. Other values require no further configuration 21.1.48. .status.nodeDisruptionPolicyStatus.clusterPolicies.units[].actions[].reload Description reload specifies the service to reload, only valid if type is reload Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be reloaded Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.1.49. .status.nodeDisruptionPolicyStatus.clusterPolicies.units[].actions[].restart Description restart specifies the service to restart, only valid if type is restart Type object Required serviceName Property Type Description serviceName string serviceName is the full name (e.g. crio.service) of the service to be restarted Service names should be of the format USD{NAME}USD{SERVICETYPE} and can up to 255 characters long. USD{NAME} must be atleast 1 character long and can only consist of alphabets, digits, ":", "-", "_", ".", and "\". USD{SERVICETYPE} must be one of ".service", ".socket", ".device", ".mount", ".automount", ".swap", ".target", ".path", ".timer", ".snapshot", ".slice" or ".scope". 21.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/machineconfigurations DELETE : delete collection of MachineConfiguration GET : list objects of kind MachineConfiguration POST : create a MachineConfiguration /apis/operator.openshift.io/v1/machineconfigurations/{name} DELETE : delete a MachineConfiguration GET : read the specified MachineConfiguration PATCH : partially update the specified MachineConfiguration PUT : replace the specified MachineConfiguration /apis/operator.openshift.io/v1/machineconfigurations/{name}/status GET : read status of the specified MachineConfiguration PATCH : partially update status of the specified MachineConfiguration PUT : replace status of the specified MachineConfiguration 21.2.1. /apis/operator.openshift.io/v1/machineconfigurations HTTP method DELETE Description delete collection of MachineConfiguration Table 21.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfiguration Table 21.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfiguration Table 21.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.4. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 202 - Accepted MachineConfiguration schema 401 - Unauthorized Empty 21.2.2. /apis/operator.openshift.io/v1/machineconfigurations/{name} Table 21.6. Global path parameters Parameter Type Description name string name of the MachineConfiguration HTTP method DELETE Description delete a MachineConfiguration Table 21.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 21.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfiguration Table 21.9. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfiguration Table 21.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.11. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfiguration Table 21.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.13. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.14. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 401 - Unauthorized Empty 21.2.3. /apis/operator.openshift.io/v1/machineconfigurations/{name}/status Table 21.15. 
Global path parameters Parameter Type Description name string name of the MachineConfiguration HTTP method GET Description read status of the specified MachineConfiguration Table 21.16. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfiguration Table 21.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.18. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfiguration Table 21.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.20. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 401 - Unauthorized Empty
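The manifest below is a minimal sketch that ties together the spec fields described in this chapter: it registers MachineSets carrying an assumed update-boot-images label for boot image updates and restarts an assumed example-app.service unit when a managed file changes. The object name cluster follows the usual singleton convention for this API group; adjust the names and label to match your cluster.
apiVersion: operator.openshift.io/v1
kind: MachineConfiguration
metadata:
  name: cluster
spec:
  managedBootImages:
    machineManagers:
    - apiGroup: machine.openshift.io
      resource: machinesets
      selection:
        mode: Partial
        partial:
          machineResourceSelector:
            matchLabels:
              update-boot-images: "true"
  nodeDisruptionPolicy:
    files:
    - path: /etc/example-app/config.yaml
      actions:
      - type: Restart
        restart:
          serviceName: example-app.service
$ oc apply -f machineconfiguration.yaml
$ oc get machineconfiguration cluster -o yaml
# Inspect status.nodeDisruptionPolicyStatus for the merged cluster policies.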
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/machineconfiguration-operator-openshift-io-v1
30.2. Network Boot Configuration
30.2. Network Boot Configuration The next step is to copy the files necessary to start the installation to the tftp server so they can be found when the client requests them. The tftp server is usually the same server as the network server exporting the installation tree. The PXE boot configuration procedure differs for BIOS and EFI. A separate yaboot configuration procedure is provided for Power Systems servers. Note Red Hat Satellite has the ability to automate the setup of a PXE server. See the Red Hat Satellite User Guide for more information. 30.2.1. Configuring PXE Boot for BIOS If tftp-server is not yet installed, run yum install tftp-server . In the tftp-server config file at /etc/xinetd.d/tftp , change the disable parameter from yes to no . Configure your DHCP server to use the boot images packaged with SYSLINUX. (If you do not have a DHCP server installed, refer to the DHCP Servers chapter in the Red Hat Enterprise Linux Deployment Guide .) A sample configuration in /etc/dhcp/dhcpd.conf might look like: You now need the pxelinux.0 file from the syslinux-nolinux package in the ISO image file. To access it, run the following commands as root: Extract the package: Create a pxelinux directory within tftpboot and copy pxelinux.0 into it: Create a pxelinux.cfg directory within pxelinux : Add a config file to this directory. The file should either be named default or named after the IP address, converted into hexadecimal format without delimiters. For example, if your machine's IP address is 10.0.0.1, the filename would be 0A000001 . A sample config file at /var/lib/tftpboot/pxelinux/pxelinux.cfg/default might look like: For instructions on how to specify the installation source, refer to Section 7.1.3, "Additional Boot Options" . Copy the splash image into your tftp root directory: Copy the boot images into your tftp root directory: Boot the client system, and select the network device as your boot device when prompted.
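With the files in place, the tftp and DHCP services must be running and reachable before the client boots. The following commands are a minimal sketch for Red Hat Enterprise Linux 6, assuming that tftp is served by xinetd and that iptables is the active firewall:
$ chkconfig xinetd on
$ service xinetd restart
# tftp runs under xinetd, so restarting xinetd picks up the disable = no change.
$ service dhcpd restart
# Reloads the /etc/dhcp/dhcpd.conf changes shown above.
$ iptables -I INPUT -p udp --dport 69 -j ACCEPT
$ service iptables save
# Opens the tftp port and persists the rule.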
[ "option space pxelinux; option pxelinux.magic code 208 = string; option pxelinux.configfile code 209 = text; option pxelinux.pathprefix code 210 = text; option pxelinux.reboottime code 211 = unsigned integer 32; subnet 10.0.0.0 netmask 255.255.255.0 { option routers 10.0.0.254; range 10.0.0.2 10.0.0.253; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 10.0.0.1; if option arch = 00:06 { filename \"pxelinux/bootia32.efi\"; } else if option arch = 00:07 { filename \"pxelinux/bootx64.efi\"; } else { filename \"pxelinux/pxelinux.0\"; } } host example-ia32 { hardware ethernet XX:YY:ZZ:11:22:33; fixed-address 10.0.0.2; } }", "mount -t iso9660 / path_to_image/name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /Packages/syslinux-nolinux- version-architecture .rpm / publicly_available_directory umount / mount_point", "rpm2cpio syslinux-nolinux- version-architecture .rpm | cpio -dimv", "mkdir /var/lib/tftpboot/pxelinux cp publicly_available_directory /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/pxelinux", "mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg", "default vesamenu.c32 prompt 1 timeout 600 display boot.msg label linux menu label ^Install or upgrade an existing system menu default kernel vmlinuz append initrd=initrd.img label vesa menu label Install system with ^basic video driver kernel vmlinuz append initrd=initrd.img xdriver=vesa nomodeset label rescue menu label ^Rescue installed system kernel vmlinuz append initrd=initrd.img rescue label local menu label Boot from ^local drive localboot 0xffff label memtest86 menu label ^Memory test kernel memtest append -", "cp /boot/grub/splash.xpm.gz /var/lib/tftpboot/pxelinux/splash.xpm.gz", "cp / path/to /x86_64/os/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/rhel6/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-netboot-pxe-config
B.3. Identity Management Clients
B.3. Identity Management Clients This section describes common client problems for IdM in Red Hat Enterprise Linux. Additional resources: To validate your /etc/sssd/sssd.conf file, see SSSD Configuration Validation in the System-Level Authentication Guide . B.3.1. The Client Is Unable to Resolve Reverse Lookups when Using an External DNS An external DNS server returns a wrong host name for the IdM server. The following errors related to the IdM server appear in the Kerberos database: What this means: The external DNS name server returns the wrong host name for the IdM server or returns no answer at all. To fix the problem: Verify your DNS configuration, and make sure the DNS domains used by IdM are properly delegated. See Section 2.1.5, "Host Name and DNS Configuration" for details. Verify your reverse (PTR) DNS records settings. See Chapter 33, Managing DNS for details. B.3.2. The Client Is Not Added to the DNS Zone When running the ipa-client-install utility, the nsupdate utility fails to add the client to the DNS zone. What this means: The DNS configuration is incorrect. To fix the problem: Verify your configuration for DNS delegation from the parent zone to IdM. See Section 2.1.5, "Host Name and DNS Configuration" for details. Make sure that dynamic updates are allowed in the IdM zone. See Section 33.5.1, "Enabling Dynamic DNS Updates" for details. For details on managing DNS in IdM, see Section 33.7, "Managing Reverse DNS Zones" . For details on managing DNS in Red Hat Enterprise Linux, see Editing Zone Files in the Networking Guide . B.3.3. Client Connection Problems Users cannot log in to a machine. Attempts to access user and group information, such as with the getent passwd admin command, fail. What this means: Client authentication problems often indicate problems with the System Security Services Daemon (SSSD) service. To fix the problem: Examine the SSSD logs in the /var/log/sssd/ directory. The directory includes a log file for the DNS domain, such as sssd_ example.com .log . If the logs do not include enough information, increase the log level: In the /etc/sssd/sssd.conf file, look up the [domain/ example.com ] section. Adjust the debug_level option to record more information in the logs. Restart the sssd service. Examine sssd_ example.com .log again. The file now includes more error messages.
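The log-level change described above can also be scripted. The commands below are a sketch that assumes the domain section is named [domain/example.com]; adjust the name to match the section in your own /etc/sssd/sssd.conf:

```bash
# Illustrative only: append debug_level = 9 directly under the [domain/example.com]
# section header, restart SSSD, and follow the domain log.
sed -i '/^\[domain\/example\.com\]/a debug_level = 9' /etc/sssd/sssd.conf
systemctl restart sssd
tail -f /var/log/sssd/sssd_example.com.log
```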
[ "Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: NEEDED_PREAUTH: admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM, Additional pre-authentication required Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: ISSUE: authtime 1309425108, etypes {rep=18 tkt=18 ses=18}, admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM Jun 30 11:11:49 server1 krb5kdc[1279](info): TGS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: UNKNOWN_SERVER: authtime 0, admin EXAMPLE COM for HTTP/[email protected], Server not found in Kerberos database", "debug_level = 9", "systemctl start sssd" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-client
function::kernel_short
function::kernel_short Name function::kernel_short - Retrieves a short value stored in kernel memory. Synopsis Arguments addr The kernel address to retrieve the short from. General Syntax kernel_short:long(addr:long) Description Returns the short value from a given kernel memory address. Reports an error when reading from the given address fails.
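A minimal illustration of calling this function from the command line is shown below. The choice of address is an assumption made purely for demonstration; here the address of the current task_struct is used because it is known to be readable kernel memory, and the short value printed has no particular meaning.

```bash
# Print the short stored at the start of the current task_struct, then exit.
# If the address were unmapped, kernel_short would report a read error instead.
stap -e 'probe begin { printf("first short of current task_struct: %d\n", kernel_short(task_current())); exit() }'
```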
[ "function kernel_short:long(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kernel-short
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
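For the package-based path, the registration command shown by the Registration Assistant generally resembles the sketch below; the exact command for your OS version may differ, so treat this only as an illustration and replace the placeholders with your own credentials.

```bash
# Register the system and attach an available subscription automatically.
sudo subscription-manager register --username <user_name> --password <password> --auto-attach

# Confirm which repositories are currently enabled.
sudo subscription-manager repos --list-enabled
```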
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_client/using_your_subscription
Chapter 6. Storage
Chapter 6. Storage Full Support of fsfreeze The fsfreeze tool is fully supported in Red Hat Enterprise Linux 6.5. The fsfreeze command halts access to a file system on a disk. fsfreeze is designed to be used with hardware RAID devices, assisting in the creation of volume snapshots. For more details on the fsfreeze utility, refer to the fsfreeze(8) man page. pNFS File Layout Hardening pNFS allows traditional NFS systems to scale out in traditional NAS environments, by allowing the compute clients to read and write data directly and in parallel, to and from the physical storage devices. The NFS server is used only to control meta-data and coordinate access, allowing predictably scalable access to very large data sets from many clients. Bug fixes to pNFS are being delivered in this release. Support of Red Hat Storage in FUSE FUSE (Filesystem in User Space) is a framework that enables development of file systems purely in the user space without requiring modifications to the kernel. Red Hat Enterprise Linux 6.5 delivers performance enhancements for user space file systems that use FUSE, for example, GlusterFS (Red Hat Storage). Dynamic aggregation of LVM metadata via lvmetad Most LVM commands require an accurate view of the LVM metadata stored on the disk devices on the system. With the current LVM design, if this information is not available, LVM must scan all the physical disk devices in the system. This requires a significant amount of I/O operations in systems that have a large number of disks. The purpose of the lvmetad daemon is to eliminate the need for this scanning by dynamically aggregating metadata information each time the status of a device changes. These events are signaled to lvmetad by udev rules. If lvmetad is not running, LVM performs a scan as it normally would. This feature is disabled by default in Red Hat Enterprise Linux 6. To enable it, refer to the use_lvmetad parameter in the /etc/lvm/lvm.conf file, and enable the lvmetad daemon by configuring the lvm2-lvmetad init script. LVM support for (non-clustered) thinly-provisioned snapshots An implementation of LVM copy-on-write (cow) snapshots, previously available as a Technology Preview, is now fully supported in Red Hat Enterprise Linux 6.5. The main advantage of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This implementation also provides support for arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots ...). Note that this feature is for use on a single system. It is not available for multi-system access in cluster environments. For more information, refer to the documentation of the -s, --snapshot option in the lvcreate man page. LVM support for (non-clustered) thinly-provisioned LVs Logical Volumes (LVs) can now be thinly provisioned to manage a storage pool of free space to be allocated to an arbitrary number of devices when needed by applications. This allows creation of devices that can be bound to a thinly provisioned pool for late allocation when an application actually writes to the pool. The thinly-provisioned pool can be expanded dynamically if and when needed for cost-effective allocation of storage space. This feature, previously available as a Technology Preview, is now fully supported. You must have the device-mapper-persistent-data package installed to use this feature. For more information, refer to the lvcreate(8) man page.
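The lvmetad and thin-provisioning features described above can be exercised with commands along the following lines. This is a sketch rather than a prescribed procedure: the volume group name vg0, the pool and volume names, and the sizes are placeholders, and the sed command assumes the use_lvmetad line is present in /etc/lvm/lvm.conf as it is in the default configuration.

```bash
# Enable the lvmetad daemon (disabled by default on Red Hat Enterprise Linux 6).
sed -i 's/^\([[:space:]]*use_lvmetad[[:space:]]*=\).*/\1 1/' /etc/lvm/lvm.conf
chkconfig lvm2-lvmetad on
service lvm2-lvmetad start

# Thin provisioning: create a pool, a thinly provisioned LV bound to it,
# and a thin snapshot of that LV.
lvcreate -L 10G --thinpool pool0 vg0
lvcreate -V 50G --thin -n thinvol vg0/pool0
lvcreate -s -n thinsnap vg0/thinvol

# Grow the pool later if more space is needed.
lvextend -L +5G vg0/pool0
```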
Multipath I/O Updates Scalability and ease-of-use of Device Mapper Multipath have been improved. These improvements include, in particular, improved responsiveness of utilities, automatic multipath device naming, and more robust multipath target detection. Performance Improvements in GFS2 Red Hat Enterprise Linux 6.5 introduces the Orlov block allocator that provides better locality for files which are truly related to each other and likely to be accessed together. In addition, when resource groups are highly contended, a different group is used to maximize performance. TRIM Support in mdadm The mdadm tool now supports the TRIM commands for RAID0, RAID1, and RAID10. Support For LSI Syncro Red Hat Enterprise Linux 6 includes code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter will be provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 6 are encouraged to provide feedback to Red Hat and LSI. For more information on LSI Syncro CS solutions, please visit http://www.lsi.com/products/shared-das/pages/default.aspx . Safe Offline Interface for DASD devices Red Hat Enterprise Linux 6.5 introduces the safe offline interface for direct access storage devices (DASDs). Instead of setting a DASD device offline and returning all outstanding I/O requests as failed, with this interface, the user can set a DASD device offline and write all outstanding data to the device before setting the device offline. Support for FBA EAV and EDEV Red Hat Enterprise Linux 6.5 supports Fixed Block Access (FBA) Extended Address Volumes (EAV) and EDEV installations. FBA Direct Access Storage Devices (DASDs) are mainframe-specific disk devices. In contrast to Extended Count Key Data (ECKD) DASDs, these disks do not require formatting and resemble the Logical Block Addressing (LBA) of non-mainframe disks. Despite this resemblance, the Linux kernel applies special handling during partition detection for FBA DASDs, resulting in a single, immutable partition being reported. While actual FBA DASD hardware is no longer available, the IBM z/VM hypervisor can simulate FBA DASD disks, backed by either ECKD or SCSI devices. EDEV storage then appears to the system as an FBA DASD (with one immutable partition), rather than an ECKD DASD.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_release_notes/bh-storage
2.5. Testing the Resource Configuration
2.5. Testing the Resource Configuration In the cluster status display shown in Section 2.4, "Creating the Resources and Resource Groups with the pcs Command" , all of the resources are running on node z1.example.com . You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources. The following command puts node z1.example.com in standby mode. After putting node z1 in standby mode, check the cluster status. Note that the resources should now all be running on z2 . The web site at the defined IP address should still display, without interruption. To remove z1 from standby mode, enter the following command. Note Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information on the resource-stickiness meta attribute, see Configuring a Resource to Prefer its Current Node in the Red Hat High Availability Add-On Reference .
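If you want to watch the failover as it happens, or check the stickiness setting mentioned in the note above, commands along these lines can help; they are supplementary to the procedure, not part of it, and assume the apachegroup resource group from the earlier configuration.

```bash
# Follow resource placement while z1 is put into and taken out of standby mode.
watch -n 2 pcs status resources

# Display the apachegroup configuration, including any resource-stickiness
# meta attribute that influences whether resources fail back to z1.
pcs resource show apachegroup
```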
[ "root@z1 ~]# pcs node standby z1.example.com", "pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.com", "root@z1 ~]# pcs node unstandby z1.example.com" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-unittest-haaa
Chapter 7. Upgrading Dev Spaces
Chapter 7. Upgrading Dev Spaces This chapter describes how to upgrade from CodeReady Workspaces 3.1 to OpenShift Dev Spaces 3.15. 7.1. Upgrading the dsc management tool This section describes how to upgrade the dsc management tool. Procedure Section 1.2, "Installing the dsc management tool" . 7.2. Specifying the update approval strategy The Red Hat OpenShift Dev Spaces Operator supports two upgrade strategies: Automatic The Operator installs new updates when they become available. Manual New updates need to be manually approved before installation begins. You can specify the update approval strategy for the Red Hat OpenShift Dev Spaces Operator by using the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An instance of OpenShift Dev Spaces that was installed by using the Red Hat Ecosystem Catalog. Procedure In the OpenShift web console, navigate to Operators Installed Operators . Click Red Hat OpenShift Dev Spaces in the list of installed Operators. Navigate to the Subscription tab. Configure the Update approval strategy to Automatic or Manual . Additional resources Changing the update channel for an Operator 7.3. Upgrading Dev Spaces using the OpenShift web console You can manually approve an upgrade from an earlier minor version using the Red Hat OpenShift Dev Spaces Operator from the Red Hat Ecosystem Catalog in the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An instance of OpenShift Dev Spaces that was installed by using the Red Hat Ecosystem Catalog. The approval strategy in the subscription is Manual . See Section 7.2, "Specifying the update approval strategy" . Procedure Manually approve the pending Red Hat OpenShift Dev Spaces Operator upgrade. See Manually approving a pending Operator upgrade . Verification steps Navigate to the OpenShift Dev Spaces instance. The 3.15 version number is visible at the bottom of the page. Additional resources Manually approving a pending Operator upgrade 7.4. Upgrading Dev Spaces using the CLI management tool This section describes how to upgrade from the previous minor version using the CLI management tool. Prerequisites An administrative account on OpenShift. A running instance of a minor version of CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, in the openshift-devspaces OpenShift project. dsc for OpenShift Dev Spaces version 3.15. See: Section 1.2, "Installing the dsc management tool" . Procedure Save and push changes back to the Git repositories for all running CodeReady Workspaces 3.1 workspaces. Shut down all workspaces in the CodeReady Workspaces 3.1 instance. Upgrade OpenShift Dev Spaces: Note For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag option to extend the Pod timeout period to 1800000 ms or longer. Verification steps Navigate to the OpenShift Dev Spaces instance. The 3.15 version number is visible at the bottom of the page. 7.5. Upgrading Dev Spaces in a restricted environment This section describes how to upgrade Red Hat OpenShift Dev Spaces and perform minor version updates by using the CLI management tool in a restricted environment. Prerequisites The OpenShift Dev Spaces instance was installed on OpenShift using the dsc --installer operator method in the openshift-devspaces project. See Section 2.1.4, "Installing Dev Spaces in a restricted environment" .
The OpenShift cluster has at least 64 GB of disk space. The OpenShift cluster is ready to operate on a restricted network, and the OpenShift control plane has access to the public internet. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication . opm . See Installing the opm CLI . jq . See Downloading jq . podman . See Podman Installation Instructions . skopeo version 1.6 or higher. See Installing Skopeo . An active skopeo session with administrative access to the private Docker registry. Authenticating to a registry , and Mirroring images for a disconnected installation . dsc for OpenShift Dev Spaces version 3.15. See Section 1.2, "Installing the dsc management tool" . Procedure Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh . 1 The private Docker registry where the images will be mirrored In all running workspaces in the CodeReady Workspaces 3.1 instance, save and push changes back to the Git repositories. Stop all workspaces in the CodeReady Workspaces 3.1 instance. Run the following command: Verification steps Navigate to the OpenShift Dev Spaces instance. The 3.15 version number is visible at the bottom of the page. Additional resources Red Hat-provided Operator catalogs Managing custom catalogs 7.6. Repairing the Dev Workspace Operator on OpenShift Under certain conditions, such as OLM restart or cluster upgrade, the Dev Spaces Operator for OpenShift Dev Spaces might automatically install the Dev Workspace Operator even when it is already present on the cluster. In that case, you can repair the Dev Workspace Operator on OpenShift as follows: Prerequisites An active oc session as a cluster administrator to the destination OpenShift cluster. See Getting started with the CLI . On the Installed Operators page of the OpenShift web console, you see multiple entries for the Dev Workspace Operator or one entry that is stuck in a loop of Replacing and Pending . Procedure Delete the devworkspace-controller namespace that contains the failing pod. Update DevWorkspace and DevWorkspaceTemplate Custom Resource Definitions (CRD) by setting the conversion strategy to None and removing the entire webhook section: spec: ... conversion: strategy: None status: ... Tip You can find and edit the DevWorkspace and DevWorkspaceTemplate CRDs in the Administrator perspective of the OpenShift web console by searching for DevWorkspace in Administration CustomResourceDefinitions . Note The DevWorkspaceOperatorConfig and DevWorkspaceRouting CRDs have the conversion strategy set to None by default. Remove the Dev Workspace Operator subscription: 1 openshift-operators or an OpenShift project where the Dev Workspace Operator is installed. Get the Dev Workspace Operator CSVs in the <devworkspace_operator.vX.Y.Z> format: Remove each Dev Workspace Operator CSV: 1 openshift-operators or an OpenShift project where the Dev Workspace Operator is installed. Re-create the Dev Workspace Operator subscription: 1 Automatic or Manual . 
Important For installPlanApproval: Manual , in the Administrator perspective of the OpenShift web console, go to Operators Installed Operators and select the following for the Dev Workspace Operator : Upgrade available Preview InstallPlan Approve . In the Administrator perspective of the OpenShift web console, go to Operators Installed Operators and verify the Succeeded status of the Dev Workspace Operator .
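The web-console steps in Section 7.2 and the manual approval flow mentioned above can also be performed from the command line, as in the following sketch. The subscription name devspaces, the namespace, and the install plan name are assumptions; check oc get subscription and oc get installplan in the relevant namespace for the names used in your cluster.

```bash
# Switch the Dev Spaces subscription to manual update approval.
oc patch subscription devspaces -n openshift-operators \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'

# With manual approval, list pending install plans and approve the one you want.
oc get installplan -n openshift-operators
oc patch installplan <install_plan_name> -n openshift-operators \
  --type merge -p '{"spec":{"approved":true}}'
```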
[ "dsc server:update -n openshift-devspaces", "bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.16 --devworkspace_operator_version \"v0.29.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.16\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.15.0\" --my_registry \" <my_registry> \" 1", "dsc server:update --che-operator-image=\"USDTAG\" -n openshift-devspaces --k8spodwaittimeout=1800000", "spec: conversion: strategy: None status:", "oc delete sub devworkspace-operator -n openshift-operators 1", "oc get csv | grep devworkspace", "oc delete csv <devworkspace_operator.vX.Y.Z> -n openshift-operators 1", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: devworkspace-operator namespace: openshift-operators spec: channel: fast name: devworkspace-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic 1 startingCSV: devworkspace-operator.v0.29.0 EOF" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/administration_guide/upgrading-devspaces
Virtualization
Virtualization Red Hat OpenShift Service on AWS 4 OpenShift Virtualization installation and usage. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/index
Proof of Concept - Deploying Red Hat Quay
Proof of Concept - Deploying Red Hat Quay Red Hat Quay 3 Deploying Red Hat Quay Red Hat OpenShift Documentation Team
[ "sudo yum install -y podman", "sudo yum module install -y container-tools", "subscription-manager register --username=<user_name> --password=<password> subscription-manager refresh subscription-manager list --available subscription-manager attach --pool=<pool_id> yum update -y", "sudo podman login registry.redhat.io", "firewall-cmd --permanent --add-port=80/tcp && firewall-cmd --permanent --add-port=443/tcp && firewall-cmd --permanent --add-port=5432/tcp && firewall-cmd --permanent --add-port=5433/tcp && firewall-cmd --permanent --add-port=6379/tcp && firewall-cmd --reload", "ip a", "--- link/ether 6c:6a:77:eb:09:f1 brd ff:ff:ff:ff:ff:ff inet 192.168.1.132/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp82s0 ---", "cat /etc/hosts", "192.168.1.138 quay-server.example.com", "mkdir -p USDQUAY/postgres-quay", "setfacl -m u:26:-wx USDQUAY/postgres-quay", "sudo podman run -d --rm --name postgresql-quay -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quay -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5432:5432 -v USDQUAY/postgres-quay:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-13", "sudo podman exec -it postgresql-quay /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS pg_trgm\" | psql -d quay -U postgres'", "sudo podman run -d --rm --name redis -p 6379:6379 -e REDIS_PASSWORD=strongpassword registry.redhat.io/rhel8/redis-6:1-110", "touch config.yaml", "BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 CREATE_NAMESPACE_ON_PUSH: true DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default FEATURE_MAILING: false SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379", "mkdir USDQUAY/config", "cp -v config.yaml USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin 1", "mkdir USDQUAY/storage", "setfacl -m u:1001:-wx USDQUAY/storage", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo podman login --tls-verify=false quay-server.example.com", "Username: quayadmin Password: password Login Succeeded!", "sudo podman pull busybox", "Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "sudo podman images", "REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/busybox latest 22667f53682a 14 hours ago 1.45 MB", "sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test", "Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures", "sudo podman rmi quay-server.example.com/quayadmin/busybox:test", "Untagged: quay-server.example.com/quayadmin/busybox:test", "sudo podman pull --tls-verify=false quay-server.example.com/quayadmin/busybox:test", "Trying to pull 
quay-server.example.com/quayadmin/busybox:test Getting image source signatures Copying blob 6ef22a7134ba [--------------------------------------] 0.0b / 0.0b Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "ls /path/to/certificates", "rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr", "cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory", "cd /path/to/configuration_directory", "SERVER_HOSTNAME: <quay-server.example.com> PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop <quay_container_name>", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo podman login quay-server.example.com", "Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority", "sudo podman login --tls-verify=false quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay", "podman network create ip-dual-stack --ipv6", "FEATURE_LISTEN_IP_VERSION: dual-stack", "sudo podman run -d --rm -p \"[::]:80:8080\" -p \"[::]:443:8443\" --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/proof_of_concept_-_deploying_red_hat_quay/index
Chapter 1. Storage APIs
Chapter 1. Storage APIs 1.1. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object 1.2. CSINode [storage.k8s.io/v1] Description CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object. Type object 1.3. CSIStorageCapacity [storage.k8s.io/v1] Description CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes. For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123" The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero The producer of these objects can decide which approach is more suitable. They are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node. Type object 1.4. PersistentVolume [v1] Description PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes Type object 1.5. PersistentVolumeClaim [v1] Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object 1.6. StorageClass [storage.k8s.io/v1] Description StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned. StorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name. Type object 1.7. StorageState [migration.k8s.io/v1alpha1] Description The state of the storage of a specific resource. Type object 1.8. StorageVersionMigration [migration.k8s.io/v1alpha1] Description StorageVersionMigration represents a migration of stored data to the latest storage version. Type object 1.9. VolumeAttachment [storage.k8s.io/v1] Description VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node. VolumeAttachment objects are non-namespaced. Type object 1.10. 
VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object 1.11. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] Description VolumeSnapshotClass specifies parameters that an underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced Type object 1.12. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object
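As a concrete illustration of the StorageClass type described in section 1.6, a minimal manifest might look like the sketch below. The provisioner string and parameters are placeholders for whatever CSI driver is deployed in your cluster, so adjust them before use.

```bash
# Create a hypothetical StorageClass; replace the provisioner with a real CSI driver name.
cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc
provisioner: example.csi.vendor.com
parameters:
  type: standard
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```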
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage_apis/storage-apis
Chapter 8. Installing a cluster on Azure into an existing VNet
Chapter 8. Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.12, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 8.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.12, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 8.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. 
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 8.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 8.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 8.2.2. 
Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 8.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
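The procedure that follows generates an ed25519 key; as the note within it explains, clusters that use FIPS validated or Modules In Process cryptographic libraries need an rsa or ecdsa key instead. A minimal alternative, assuming the same path conventions, is shown below.

```bash
# Generate an ECDSA key pair for FIPS-mode installations (the file name is an example).
ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa_ocp
```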
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.6.1. 
Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 8.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 8.5. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . 
compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. 
premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . 
platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.6.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 8.1. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 8.6.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 8.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Famil StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 8.6.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 1 10 13 19 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 If you use an existing VNet, specify the name of the resource group that contains it. 16 If you use an existing VNet, specify its name. 17 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 18 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 Whether to enable or disable FIPS mode. 
By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.6.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 8.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.8. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify a existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 . 
Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role <privileged_role> \ 2 --scope <disk_encryption_set_id> \ 3 1 Specifies the ID of the cluster service principal obtained in the step. 2 Specifies the Azure role name. You can use the Contributor role or a custom role with the necessary permissions. 3 Specifies the ID of the disk encryption set. Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 8.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
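Before you customize the cluster, it can be useful to run a couple of additional read-only checks from the CLI to confirm that the installation is healthy. The following commands are a suggested sanity check rather than part of the documented procedure, and they assume that the KUBECONFIG environment variable is still exported as shown in the "Logging in to the cluster by using the CLI" section:

USD oc get nodes
USD oc get clusteroperators

All nodes should report a Ready status, and the cluster Operators should report Available as True, before you begin post-installation configuration.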
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "az identity list --resource-group \"<existing_resource_group>\"", "az group list", "az identity list --resource-group \"<installer_created_resource_group>\"", "az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2", "az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2", "az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv", "az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc create -f storage-class-definition.yaml", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/installing-azure-vnet
Chapter 3. Understanding Ansible concepts
Chapter 3. Understanding Ansible concepts As an automation developer, review the following Ansible concepts to create successful Ansible playbooks and automation execution environments before beginning your Ansible development project. 3.1. Prerequisites Ansible is installed. For information about installing Ansible, see Installing Ansible in the Ansible documentation. 3.2. About Ansible Playbooks Playbooks are files written in YAML that contain specific sets of human-readable instructions, or "plays", that you send to run on a single target or groups of targets. Playbooks can be used to manage configurations of and deployments to remote machines, as well as sequence multi-tier rollouts involving rolling updates. Use playbooks to delegate actions to other hosts, interacting with monitoring servers and load balancers along the way. Once written, playbooks can be used repeatedly across your enterprise for automation. 3.3. About Ansible Roles A role is Ansible's way of bundling automation content as well as loading related vars, files, tasks, handlers, and other artifacts automatically by utilizing a known file structure. Instead of creating huge playbooks with hundreds of tasks, you can use roles to break the tasks apart into smaller, more discrete and composable units of work. You can find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day on Ansible Galaxy. Filter your search by Type and select Role . Once you find a role that you're interested in, you can download it by using the ansible-galaxy command that comes bundled with Ansible: USD ansible-galaxy role install username.rolename 3.4. About Content Collections An Ansible Content Collection is a ready-to-use toolkit for automation. It includes multiple types of content such as playbooks, roles, modules, and plugins all in one place. The diagram below shows the basic structure of a collection: collection/ ├── docs/ ├── galaxy.yml ├── meta/ │ └── runtime.yml ├── plugins/ │ ├── modules/ │ │ └── module1.py │ ├── inventory/ │ ├── lookup/ │ ├── filter/ │ └── .../ ├── README.md ├── roles/ │ ├── role1/ │ ├── role2/ │ └── .../ ├── playbooks/ │ ├── files/ │ ├── vars/ │ ├── templates/ │ ├── playbook1.yml │ └── tasks/ └── tests/ ├── integration/ └── unit/ In Red Hat Ansible Automation Platform, automation hub serves as the source for Ansible Certified Content Collections. 3.5. About Execution Environments Automation execution environments are consistent and shareable container images that serve as Ansible control nodes. Automation execution environments reduce the challenge of sharing Ansible content that has external dependencies. Automation execution environments contain: Ansible Core Ansible Runner Ansible Collections Python libraries System dependencies Custom user needs You can define and create an automation execution environment using Ansible Builder. Additional resources For more information on Ansible Builder, see Creating and Consuming Execution Environments .
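To illustrate the playbook format described earlier in this chapter, the following is a minimal example playbook. It is provided for illustration only and is not taken from the referenced documentation; the webservers host group and the httpd package name are placeholders that you would replace with values from your own inventory.

---
- name: Ensure web servers are configured
  hosts: webservers        # assumed inventory group
  become: true
  tasks:
    - name: Install the Apache package
      ansible.builtin.package:
        name: httpd
        state: present

    - name: Start and enable the web service
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

You can run a playbook like this against your inventory with the ansible-playbook command, for example: USD ansible-playbook -i inventory playbook1.yml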
[ "ansible-galaxy role install username.rolename", "collection/ ├── docs/ ├── galaxy.yml ├── meta/ │ └── runtime.yml ├── plugins/ │ ├── modules/ │ │ └── module1.py │ ├── inventory/ │ ├── lookup/ │ ├── filter/ │ └── .../ ├── README.md ├── roles/ │ ├── role1/ │ ├── role2/ │ └── .../ ├── playbooks/ │ ├── files/ │ ├── vars/ │ ├── templates/ │ ├── playbook1.yml │ └── tasks/ └── tests/ ├── integration/ └── unit/" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_creator_guide/understanding_ansible_concepts
Chapter 2. Configuring an Azure Stack Hub account
Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. 
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. 
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates .
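As an optional check that is not part of the documented procedure, you can verify that the recorded service principal credentials work before you start the installation by signing in to the Azure CLI with them. Substitute the appId, password, and tenantId values that you recorded in the previous steps, and note that the AzureStackCloud environment must still be the active cloud:

USD az login --service-principal -u <app_id> -p <password> --tenant <tenant_id>

If the login succeeds, the CLI prints the subscription details available to the service principal. You can then sign back in with your user account by running az login again.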
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account
Chapter 1. Recommended practices for installing large clusters
Chapter 1. Recommended practices for installing large clusters Apply the following practices when installing large clusters or scaling clusters to larger node counts. 1.1. Recommended practices for installing large-scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network CIDR accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 The default cluster network CIDR 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to support node counts beyond 500.
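The 500-node limit follows from simple subnet arithmetic: with hostPrefix: 23 , each node receives one /23 subnet carved out of the cluster network CIDR, so the number of addressable nodes is 2^(hostPrefix - CIDR prefix). The short shell loop below is only an illustrative back-of-the-envelope check of that arithmetic, not an official sizing tool.

# How many /23 node subnets fit into a given cluster network CIDR?
# nodes = 2^(hostPrefix - prefix); hostPrefix is 23 in the example above.
for prefix in 14 12 10; do
    echo "10.128.0.0/${prefix} with hostPrefix 23 -> $(( 2 ** (23 - prefix) )) node subnets"
done
# Prints 512 for /14, 2048 for /12, and 8192 for /10, which is why a /14 network
# cannot accommodate clusters larger than about 500 nodes.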
[ "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/recommended-cluster-install-practices
Chapter 18. Backing up and restoring Red Hat Quay on a standalone deployment
Chapter 18. Backing up and restoring Red Hat Quay on a standalone deployment Use the content within this section to back up and restore Red Hat Quay in standalone deployments. 18.1. Optional: Enabling read-only mode for Red Hat Quay Enabling read-only mode for your Red Hat Quay deployment allows you to manage the registry's operations. Red Hat Quay administrators can enable read-only mode to restrict write access to the registry, which helps ensure data integrity, mitigate risks during maintenance windows, and provide a safeguard against unintended modifications to registry data. It also helps to ensure that your Red Hat Quay registry remains online and available to serve images to users. Note In some cases, a read-only option for Red Hat Quay is not possible since it requires inserting a service key and other manual configuration changes. As an alternative to read-only mode, Red Hat Quay administrators might consider enabling the DISABLE_PUSHES feature. When this field is set to true , users are unable to push images or image tags to the registry when using the CLI. Enabling DISABLE_PUSHES differs from read-only mode because the database is not set as read-only when it is enabled. This field might be useful in some situations such as when Red Hat Quay administrators want to calculate their registry's quota and disable image pushing until after calculation has completed. With this method, administrators can avoid putting the whole registry in read-only mode, which affects the database, so that most operations can still be done. For information about enabling this configuration field, see Miscellaneous configuration fields . Prerequisites If you are using Red Hat Enterprise Linux (RHEL) 7.x: You have enabled the Red Hat Software Collections List (RHSCL). You have installed Python 3.6. You have downloaded the virtualenv package. You have installed the git CLI. If you are using Red Hat Enterprise Linux (RHEL) 8: You have installed Python 3 on your machine. You have downloaded the python3-virtualenv package. You have installed the git CLI. You have cloned the https://github.com/quay/quay.git repository. 18.1.1. Creating service keys for standalone Red Hat Quay Red Hat Quay uses service keys to communicate with various components. These keys are used to sign completed requests, such as requesting to scan images, login, storage access, and so on. Procedure If your Red Hat Quay registry is readily available, you can generate service keys inside of the Quay registry container. Enter the following command to generate a key pair inside of the Quay container: USD podman exec quay python3 tools/generatekeypair.py quay-readonly If your Red Hat Quay is not readily available, you must generate your service keys inside of a virtual environment. Change into the directory of your Red Hat Quay deployment and create a virtual environment inside of that directory: USD cd <USDQUAY>/quay && virtualenv -v venv Activate the virtual environment by entering the following command: USD source venv/bin/activate Optional.
Install the pip CLI tool if you do not have it installed: USD venv/bin/pip install --upgrade pip In your Red Hat Quay directory, create a requirements-generatekeys.txt file with the following content: USD cat << EOF > requirements-generatekeys.txt cryptography==3.4.7 pycparser==2.19 pycryptodome==3.9.4 pycryptodomex==3.9.4 pyjwkest==1.4.2 PyJWT==1.7.1 Authlib==1.0.0a2 EOF Enter the following command to install the Python dependencies defined in the requirements-generatekeys.txt file: USD venv/bin/pip install -r requirements-generatekeys.txt Enter the following command to create the necessary service keys: USD PYTHONPATH=. venv/bin/python /<path_to_cloned_repo>/tools/generatekeypair.py quay-readonly Example output Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem Enter the following command to deactivate the virtual environment: USD deactivate 18.1.2. Adding keys to the PostgreSQL database Use the following procedure to add your service keys to the PostgreSQL database. Prerequisites You have created the service keys. Procedure Enter the following command to enter your Red Hat Quay database environment: USD podman exec -it postgresql-quay psql -U postgres -d quay Display the approval types and associated notes of the servicekeyapproval table by entering the following command: quay=# select * from servicekeyapproval; Example output id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 | ... Add the service key to your Red Hat Quay database by entering the following query: quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}'); Example output INSERT 0 1 Next, add the key approval with the following query: quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES ("ServiceKeyApprovalType.SUPERUSER", "CURRENT_DATE", {include_notes_here_on_why_this_is_being_added}); Example output INSERT 0 1 Set the approval_id field on the created service key row to the id field from the created service key approval. You can use the following UPDATE statement, which uses a SELECT subquery to look up the necessary ID: UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly'; UPDATE 1 18.1.3. Configuring read-only mode for standalone Red Hat Quay After the service keys have been created and added to your PostgreSQL database, you must restart the Quay container on your standalone deployment. Prerequisites You have created the service keys and added them to your PostgreSQL database. Procedure Shut down all Red Hat Quay instances on all virtual machines.
For example: USD podman stop <quay_container_name_on_virtual_machine_a> USD podman stop <quay_container_name_on_virtual_machine_b> Enter the following command to copy the contents of the quay-readonly.kid file and the quay-readonly.pem file to the directory that holds your Red Hat Quay configuration bundle: USD cp quay-readonly.kid quay-readonly.pem USDQuay/config Enter the following command to set file permissions on all files in your configuration bundle folder: USD setfacl -m user:1001:rw USDQuay/config/* Modify your Red Hat Quay config.yaml file and add the following information: # ... REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem' # ... Distribute the new configuration bundle to all Red Hat Quay instances. Start Red Hat Quay by entering the following command: USD podman run -d --rm -p 80:8080 -p 443:8443 \ --name=quay-main-app \ -v USDQUAY/config:/conf/stack:Z \ -v USDQUAY/storage:/datastorage:Z \ {productrepo}/{quayimage}:{productminv} After starting Red Hat Quay, a banner inside your instance informs users that Red Hat Quay is running in read-only mode. Pushes should be rejected and a 405 error should be logged. You can test this by running the following command: USD podman push <quay-server.example.com>/quayadmin/busybox:test Example output 613be09ab3c0: Preparing denied: System is currently read-only. Pulls will succeed but all write operations are currently suspended. With your Red Hat Quay deployment in read-only mode, you can safely manage your registry's operations and perform such actions as backup and restore. Optional. After you are finished with read-only mode, you can return to normal operations by removing the following information from your config.yaml file. Then, restart your Red Hat Quay deployment: # ... REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem' # ... USD podman restart <container_id> 18.1.4. Updating read-only expiration time The Red Hat Quay read-only key has an expiration date, and when that date passes the key is deactivated. Before the key expires, its expiration time can be updated in the database. To update the key, connect to your Red Hat Quay production database by using the methods described earlier and issue the following query: quay=# UPDATE servicekey SET expiration_date = 'new-date' WHERE id = servicekey_id; The list of service key IDs can be obtained by running the following query: SELECT id, name, expiration_date FROM servicekey; 18.2. Backing up Red Hat Quay on standalone deployments This procedure describes how to create a backup of Red Hat Quay on standalone deployments.
Procedure Create a temporary backup directory, for example, quay-backup : USD mkdir /tmp/quay-backup The following example command denotes the local directory that the Red Hat Quay was started in, for example, /opt/quay-install : Change into the directory that bind-mounts to /conf/stack inside of the container, for example, /opt/quay-install , by running the following command: USD cd /opt/quay-install Compress the contents of your Red Hat Quay deployment into an archive in the quay-backup directory by entering the following command: USD tar cvf /tmp/quay-backup/quay-backup.tar.gz * Example output: config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key Back up the Quay container service by entering the following command: Redirect the contents of your conf/stack/config.yaml file to your temporary quay-config.yaml file by entering the following command: USD podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml Obtain the DB_URI located in your temporary quay-config.yaml by entering the following command: USD grep DB_URI /tmp/quay-backup/quay-config.yaml Example output: Extract the PostgreSQL contents to your temporary backup directory in a backup .sql file by entering the following command: USD pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql Print the contents of your DISTRIBUTED_STORAGE_CONFIG by entering the following command: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> s3_region: <region> Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 7: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 7: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Sync the quay bucket to the /tmp/quay-backup/blob-backup/ directory from the hostname of your DISTRIBUTED_STORAGE_CONFIG : USD aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2 Example output: It is recommended that you delete the quay-config.yaml file after syncing the quay bucket because it contains sensitive information. The quay-config.yaml file will not be lost because it is backed up in the quay-backup.tar.gz file. 18.3. Restoring Red Hat Quay on standalone deployments This procedure describes how to restore Red Hat Quay on standalone deployments. Prerequisites You have backed up your Red Hat Quay deployment. 
Procedure Create a new directory that will bind-mount to /conf/stack inside of the Red Hat Quay container: USD mkdir /opt/new-quay-install Copy the contents of your temporary backup directory created in Backing up Red Hat Quay on standalone deployments to the new-quay-install directory created in Step 1: USD cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/ Change into the new-quay-install directory by entering the following command: USD cd /opt/new-quay-install/ Extract the contents of your Red Hat Quay directory: USD tar xvf /tmp/quay-backup/quay-backup.tar.gz * Example output: Recall the DB_URI from your backed-up config.yaml file by entering the following command: USD grep DB_URI config.yaml Example output: postgresql://<username>:[email protected]/quay Run the following command to enter the PostgreSQL database server: USD sudo postgres Enter psql and create a new database in 172.24.10.50 to restore the quay databases, for example, example_restore_registry_quay_database , by entering the following command: USD psql "host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123" postgres=> CREATE DATABASE example_restore_registry_quay_database; Example output: Connect to the database by running the following command: postgres=# \c "example-restore-registry-quay-database"; Example output: You are now connected to database "example-restore-registry-quay-database" as user "postgres". Create a pg_trgm extension for your Quay database by running the following command: example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm; Example output: CREATE EXTENSION Exit the postgres CLI by entering the following command: \q Import the database backup to your new database by running the following command: USD psql "host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123" -W < /tmp/quay-backup/quay-backup.sql Example output: Update the value of DB_URI in your config.yaml from postgresql://<username>:[email protected]/quay to postgresql://<username>:[email protected]/example-restore-registry-quay-database before restarting the Red Hat Quay deployment. Note The DB_URI format is postgresql://<login_user_name>:<login_user_password>@<postgresql_host>/<quay_database> . If you are moving from one PostgreSQL server to another PostgreSQL server, update the value of <login_user_name> , <login_user_password> and <postgresql_host> at the same time. In the /opt/new-quay-install directory, print the contents of your DISTRIBUTED_STORAGE_CONFIG bundle: USD cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10 Example output: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_region: <region> s3_secret_key: <s3_secret_key> host: <host_name> Note Your DISTRIBUTED_STORAGE_CONFIG in /opt/new-quay-install must be updated before restarting your Red Hat Quay deployment.
Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 13: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 13: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Create a new s3 bucket by entering the following command: USD aws s3 mb s3://<new_bucket_name> --region us-east-2 Example output: USD make_bucket: quay Upload all blobs to the new s3 bucket by entering the following command: USD aws s3 sync --no-verify-ssl \ --endpoint-url <example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/ 1 The Red Hat Quay registry endpoint must be the same before backup and after restore. Example output: upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec ... Before restarting your Red Hat Quay deployment, update the storage settings in your config.yaml: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> s3_region: <region> host: <host_name>
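The backup steps in section 18.2 can also be collected into a single helper script. The following is an unofficial sketch that assumes the same example values used above (a configuration directory of /opt/quay-install , a PostgreSQL server at 172.24.10.50 , and AWS credentials already exported); replace the placeholders and verify every step against your own deployment before relying on it.

#!/usr/bin/env bash
# Sketch only: gathers the Red Hat Quay configuration, database dump, and blob
# backup into /tmp/quay-backup, mirroring the manual steps in section 18.2.
set -euo pipefail

BACKUP_DIR=/tmp/quay-backup
QUAY_DIR=/opt/quay-install          # directory that bind-mounts to /conf/stack
DB_HOST=172.24.10.50                # host from the DB_URI in config.yaml
DB_USER="<username>"
BUCKET="<bucket_name>"              # from DISTRIBUTED_STORAGE_CONFIG

mkdir -p "${BACKUP_DIR}"

# Archive the configuration bundle.
tar cvf "${BACKUP_DIR}/quay-backup.tar.gz" -C "${QUAY_DIR}" .

# Keep a copy of the running configuration and dump the Quay database.
podman exec -it quay cat /conf/stack/config.yaml > "${BACKUP_DIR}/quay-config.yaml"
pg_dump -h "${DB_HOST}" -p 5432 -d quay -U "${DB_USER}" -W -O > "${BACKUP_DIR}/quay-backup.sql"

# Mirror the object storage bucket locally.
aws s3 sync "s3://${BUCKET}" "${BACKUP_DIR}/blob-backup/" --source-region us-east-2

Because the script runs pg_dump with -W , it prompts for the database password interactively. Delete the quay-config.yaml copy after the sync completes if you do not want credentials left on disk, as recommended above.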
[ "podman exec quay python3 tools/generatekeypair.py quay-readonly", "cd <USDQUAY>/quay && virtualenv -v venv", "source venv/bin/activate", "venv/bin/pip install --upgrade pip", "cat << EOF > requirements-generatekeys.txt cryptography==3.4.7 pycparser==2.19 pycryptodome==3.9.4 pycryptodomex==3.9.4 pyjwkest==1.4.2 PyJWT==1.7.1 Authlib==1.0.0a2 EOF", "venv/bin/pip install -r requirements-generatekeys.txt", "PYTHONPATH=. venv/bin/python /<path_to_cloned_repo>/tools/generatekeypair.py quay-readonly", "Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem", "deactivate", "podman exec -it postgresql-quay psql -U postgres -d quay", "quay=# select * from servicekeyapproval;", "id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 |", "quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}');", "INSERT 0 1", "quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES (\"ServiceKeyApprovalType.SUPERUSER\", \"CURRENT_DATE\", {include_notes_here_on_why_this_is_being_added});", "INSERT 0 1", "UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly';", "UPDATE 1", "podman stop <quay_container_name_on_virtual_machine_a>", "podman stop <quay_container_name_on_virtual_machine_b>", "cp quay-readonly.kid quay-readonly.pem USDQuay/config", "setfacl -m user:1001:rw USDQuay/config/*", "REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'", "podman run -d --rm -p 80:8080 -p 443:8443 --name=quay-main-app -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "podman push <quay-server.example.com>/quayadmin/busybox:test", "613be09ab3c0: Preparing denied: System is currently read-only. 
Pulls will succeed but all write operations are currently suspended.", "REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'", "podman restart <container_id>", "quay=# UPDATE servicekey SET expiration_date = 'new-date' WHERE id = servicekey_id;", "SELECT id, name, expiration_date FROM servicekey;", "mkdir /tmp/quay-backup", "podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "cd /opt/quay-install", "tar cvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "podman inspect quay-app | jq -r '.[0].Config.CreateCommand | .[]' | paste -s -d ' ' - /usr/bin/podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml", "grep DB_URI /tmp/quay-backup/quay-config.yaml", "postgresql://<username>:[email protected]/quay", "pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> s3_region: <region>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2", "download: s3://<user_name>/registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a to registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a download: s3://<user_name>/registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d to registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d", "mkdir /opt/new-quay-install", "cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/", "cd /opt/new-quay-install/", "tar xvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "grep DB_URI config.yaml", "postgresql://<username>:[email protected]/quay", "sudo postgres", "psql \"host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123\" postgres=> CREATE DATABASE example_restore_registry_quay_database;", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm;", "CREATE EXTENSION", "\\q", "psql \"host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123\" -W < /tmp/quay-backup/quay-backup.sql", "SET SET SET SET SET", "cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_region: <region> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 mb s3://<new_bucket_name> --region us-east-2", "make_bucket: quay", "aws s3 sync --no-verify-ssl --endpoint-url 
<example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/", "upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> s3_region: <region> host: <host_name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/standalone-deployment-backup-restore
Chapter 66. Kubernetes Event
Chapter 66. Kubernetes Event Since Camel 3.20 Both producer and consumer are supported The Kubernetes Event component is one of the Kubernetes Components which provides a producer to execute Kubernetes Event operations and a consumer to consume events related to Event objects. 66.1. Dependencies When using kubernetes-events with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 66.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 66.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 66.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 66.3. Component Options The Kubernetes Event component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 66.4. Endpoint Options The Kubernetes Event endpoint is configured using URI syntax: with the following path and query parameters: 66.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 66.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 66.5. Message Headers The Kubernetes Event component supports 14 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesEventsLabels (producer) Constant: KUBERNETES_EVENTS_LABELS The event labels. Map CamelKubernetesEventTime (producer) Constant: KUBERNETES_EVENT_TIME The event time in ISO-8601 extended offset date-time format, such as '2011-12-03T10:15:3001:00'. server time String CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventType (producer) Constant: KUBERNETES_EVENT_TYPE The event type. String CamelKubernetesEventReason (producer) Constant: KUBERNETES_EVENT_REASON The event reason. String CamelKubernetesEventNote (producer) Constant: KUBERNETES_EVENT_NOTE The event note. String CamelKubernetesEventRegarding (producer) Constant: KUBERNETES_EVENT_REGARDING The event regarding. ObjectReference CamelKubernetesEventRelated (producer) Constant: KUBERNETES_EVENT_RELATED The event related. ObjectReference CamelKubernetesEventReportingController (producer) Constant: KUBERNETES_EVENT_REPORTING_CONTROLLER The event reporting controller. String CamelKubernetesEventReportingInstance (producer) Constant: KUBERNETES_EVENT_REPORTING_INSTANCE The event reporting instance. String CamelKubernetesEventName (producer) Constant: KUBERNETES_EVENT_NAME The event name. String CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 66.6. Supported producer operation listEvents listEventsByLabels getEvent createEvent updateEvent deleteEvent 66.7. Kubernetes Events Producer Examples listEvents: this operation lists the events. from("direct:list"). to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEvents"). to("mock:result"); This operation returns a list of events from your cluster. The type of the events is io.fabric8.kubernetes.api.model.events.v1.Event . To indicate from which namespace the events are expected, it is possible to set the message header CamelKubernetesNamespaceName . By default, the events of all namespaces are returned. listEventsByLabels: this operation lists the events selected by labels. 
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEventsByLabels"). to("mock:result"); This operation returns a list of events from your cluster that occurred in any namespaces, using a label selector (in the example above only expect events which have the label "key1" set to "value1" and the label "key2" set to "value2"). The type of the events is io.fabric8.kubernetes.api.model.events.v1.Event . This operation expects the message header CamelKubernetesEventsLabels to be set to a Map<String, String> where the key-value pairs represent the expected label names and values. getEvent: this operation gives a specific event. from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "test"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "event1"); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=getEvent"). to("mock:result"); This operation returns the event matching the criteria from your cluster. The type of the event is io.fabric8.kubernetes.api.model.events.v1.Event . This operation expects two message headers which are CamelKubernetesNamespaceName and CamelKubernetesEventName , the first one needs to be set to the name of the target namespace and second one needs to be set to the target name of event. If no matching event could be found, null is returned. createEvent: this operation creates a new event. from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "test1"); Map<String, String> labels = new HashMap<>(); labels.put("this", "rocks"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION_PRODUCER, "Some Action"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_TYPE, "Normal"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REASON, "Some Reason"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_CONTROLLER, "Some-Reporting-Controller"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_INSTANCE, "Some-Reporting-Instance"); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=createEvent"). to("mock:result"); This operation publishes a new event in your cluster. An event can be created in two ways either from message headers or directly from an io.fabric8.kubernetes.api.model.events.v1.EventBuilder . Whatever the way used to create the event: The operation expects two message headers which are CamelKubernetesNamespaceName and CamelKubernetesEventName , to set respectively the name of namespace and the name of the produced event. The operation supports the message header CamelKubernetesEventsLabels to set the labels to the produced event. 
The message headers that can be used to create an event are CamelKubernetesEventTime , CamelKubernetesEventAction , CamelKubernetesEventType , CamelKubernetesEventReason , CamelKubernetesEventNote , CamelKubernetesEventRegarding , CamelKubernetesEventRelated , CamelKubernetesEventReportingController and CamelKubernetesEventReportingInstance . In case the supported message headers are not enough for a specific use case, it is still possible to set the message body with an object of type io.fabric8.kubernetes.api.model.events.v1.EventBuilder representing a prefilled builder to use when creating the event. Please note that the labels, name of event and name of namespace are always set from the message headers, even when the builder is provided. updateEvent: this operation updates an existing event. The behavior is exactly the same as createEvent , only the name of the operation is different. deleteEvent: this operation deletes an existing event. from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "test1"); } }).to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=deleteEvent"). to("mock:result"); This operation removes an existing event from your cluster. It returns a boolean to indicate whether the operation was successful or not. This operation expects two message headers which are CamelKubernetesNamespaceName and CamelKubernetesEventName , the first one needs to be set to the name of the target namespace and the second one needs to be set to the name of the target event. 66.8. Kubernetes Events Consumer Example fromF("kubernetes-events://%s?oauthToken=%s", host, authToken) .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default")) .setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, constant("test")) .process(new KubernetesProcessor()).to("mock:result"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Event cm = exchange.getIn().getBody(Event.class); log.info("Got event with event name: " + cm.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a message per event received on the namespace "default" for the event "test". It also sets the action ( io.fabric8.kubernetes.client.Watcher.Action ) in the message header CamelKubernetesEventAction and the timestamp ( long ) in the message header CamelKubernetesEventTimestamp . 66.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID.
String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
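All of the options above are ordinary Spring Boot configuration properties, so they can be set in application.properties , application.yaml , or as environment variables. The following application.yaml fragment is a minimal sketch that illustrates the naming pattern only; the components shown and the non-default values are examples chosen for illustration, not recommendations from this reference.
# Hypothetical application.yaml for a Camel Spring Boot application.
camel:
  component:
    kubernetes-pods:
      # Start the producer on the first message instead of at startup
      # (the default is false).
      lazy-start-producer: true
      # Let consumer exceptions be handled by Camel's routing Error Handler
      # (the default is false).
      bridge-error-handler: true
    kubernetes-secrets:
      # Disable auto configuration of the kubernetes-secrets component.
      enabled: false
    kubernetes-job:
      # Disable autowiring of a KubernetesClient bean from the registry
      # (the default is true).
      autowired-enabled: false
Options of type KubernetesClient, such as camel.component.kubernetes-pods.kubernetes-client, are normally satisfied by placing a bean of that type in the registry and leaving autowired-enabled at its default of true.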
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-events:masterUrl", "from(\"direct:list\"). to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEvents\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEventsByLabels\"). to(\"mock:result\");", "from(\"direct:get\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"test\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, \"event1\"); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=getEvent\"). to(\"mock:result\");", "from(\"direct:get\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, \"test1\"); Map<String, String> labels = new HashMap<>(); labels.put(\"this\", \"rocks\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION_PRODUCER, \"Some Action\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_TYPE, \"Normal\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REASON, \"Some Reason\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_CONTROLLER, \"Some-Reporting-Controller\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_INSTANCE, \"Some-Reporting-Instance\"); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=createEvent\"). to(\"mock:result\");", "from(\"direct:get\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, \"test1\"); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=deleteEvent\"). to(\"mock:result\");", "fromF(\"kubernetes-events://%s?oauthToken=%s\", host, authToken) .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant(\"default\")) .setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, constant(\"test\")) .process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Event cm = exchange.getIn().getBody(Event.class); log.info(\"Got event with event name: \" + cm.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-event-component-starter
Chapter 15. Config map reference for the Cluster Monitoring Operator
Chapter 15. Config map reference for the Cluster Monitoring Operator 15.1. Cluster Monitoring Operator configuration reference Parts of OpenShift Container Platform cluster monitoring are configurable. The API is accessible by setting parameters defined in various config maps. To configure monitoring components, edit the ConfigMap object named cluster-monitoring-config in the openshift-monitoring namespace. These configurations are defined by ClusterMonitoringConfiguration . To configure monitoring components that monitor user-defined projects, edit the ConfigMap object named user-workload-monitoring-config in the openshift-user-workload-monitoring namespace. These configurations are defined by UserWorkloadConfiguration . The configuration file is always defined under the config.yaml key in the config map data. Note Not all configuration parameters are exposed. Configuring cluster monitoring is optional. If a configuration does not exist or is empty, default values are used. If the configuration is invalid YAML data, the Cluster Monitoring Operator stops reconciling the resources and reports Degraded=True in the status conditions of the Operator. 15.2. AdditionalAlertmanagerConfig 15.2.1. Description The AdditionalAlertmanagerConfig resource defines settings for how a component communicates with additional Alertmanager instances. 15.2.2. Required apiVersion Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig , ThanosRulerConfig Property Type Description apiVersion string Defines the API version of Alertmanager. Possible values are v1 or v2 . The default is v2 . bearerToken *v1.SecretKeySelector Defines the secret key reference containing the bearer token to use when authenticating to Alertmanager. pathPrefix string Defines the path prefix to add in front of the push endpoint path. scheme string Defines the URL scheme to use when communicating with Alertmanager instances. Possible values are http or https . The default value is http . staticConfigs []string A list of statically configured Alertmanager endpoints in the form of <hosts>:<port> . timeout *string Defines the timeout value used when sending alerts. tlsConfig TLSConfig Defines the TLS settings to use for Alertmanager connections. 15.3. AlertmanagerMainConfig 15.3.1. Description The AlertmanagerMainConfig resource defines settings for the Alertmanager component in the openshift-monitoring namespace. Appears in: ClusterMonitoringConfiguration Property Type Description enabled *bool A Boolean flag that enables or disables the main Alertmanager instance in the openshift-monitoring namespace. The default value is true . enableUserAlertmanagerConfig bool A Boolean flag that enables or disables user-defined namespaces to be selected for AlertmanagerConfig lookups. This setting only applies if the user workload monitoring instance of Alertmanager is not enabled. The default value is false . logLevel string Defines the log level setting for Alertmanager. The possible values are: error , warn , info , debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. 15.4. AlertmanagerUserWorkloadConfig 15.4.1. 
Description The AlertmanagerUserWorkloadConfig resource defines the settings for the Alertmanager instance used for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description enabled bool A Boolean flag that enables or disables a dedicated instance of Alertmanager for user-defined alerts in the openshift-user-workload-monitoring namespace. The default value is false . enableAlertmanagerConfig bool A Boolean flag to enable or disable user-defined namespaces to be selected for AlertmanagerConfig lookup. The default value is false . logLevel string Defines the log level setting for Alertmanager for user workload monitoring. The possible values are error , warn , info , and debug . The default value is info . resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 15.5. ClusterMonitoringConfiguration 15.5.1. Description The ClusterMonitoringConfiguration resource defines settings that customize the default platform monitoring stack through the cluster-monitoring-config config map in the openshift-monitoring namespace. Property Type Description alertmanagerMain * AlertmanagerMainConfig AlertmanagerMainConfig defines settings for the Alertmanager component in the openshift-monitoring namespace. enableUserWorkload *bool UserWorkloadEnabled is a Boolean flag that enables monitoring for user-defined projects. k8sPrometheusAdapter * K8sPrometheusAdapter K8sPrometheusAdapter defines settings for the Prometheus Adapter component. kubeStateMetrics * KubeStateMetricsConfig KubeStateMetricsConfig defines settings for the kube-state-metrics agent. prometheusK8s * PrometheusK8sConfig PrometheusK8sConfig defines settings for the Prometheus component. prometheusOperator * PrometheusOperatorConfig PrometheusOperatorConfig defines settings for the Prometheus Operator component. openshiftStateMetrics * OpenShiftStateMetricsConfig OpenShiftMetricsConfig defines settings for the openshift-state-metrics agent. telemeterClient * TelemeterClientConfig TelemeterClientConfig defines settings for the Telemeter Client component. thanosQuerier * ThanosQuerierConfig ThanosQuerierConfig defines settings for the Thanos Querier component. 15.6. DedicatedServiceMonitors 15.6.1. Description You can use the DedicatedServiceMonitors resource to configure dedicated Service Monitors for the Prometheus Adapter Appears in: K8sPrometheusAdapter Property Type Description enabled bool When enabled is set to true , the Cluster Monitoring Operator (CMO) deploys a dedicated Service Monitor that exposes the kubelet /metrics/resource endpoint. This Service Monitor sets honorTimestamps: true and only keeps metrics that are relevant for the pod resource queries of Prometheus Adapter. Additionally, Prometheus Adapter is configured to use these dedicated metrics. Overall, this feature improves the consistency of Prometheus Adapter-based CPU usage measurements used by, for example, the oc adm top pod command or the Horizontal Pod Autoscaler. 15.7. K8sPrometheusAdapter 15.7.1. Description The K8sPrometheusAdapter resource defines settings for the Prometheus Adapter component. 
Appears in: ClusterMonitoringConfiguration Property Type Description audit *Audit Defines the audit configuration used by the Prometheus Adapter instance. Possible profile values are: metadata , request , requestresponse , and none . The default value is metadata . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. dedicatedServiceMonitors * DedicatedServiceMonitors Defines dedicated service monitors. 15.8. KubeStateMetricsConfig 15.8.1. Description The KubeStateMetricsConfig resource defines settings for the kube-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.9. OpenShiftStateMetricsConfig 15.9.1. Description The OpenShiftStateMetricsConfig resource defines settings for the openshift-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.10. PrometheusK8sConfig 15.10.1. Description The PrometheusK8sConfig resource defines settings for the Prometheus component. Appears in: ClusterMonitoringConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedBodySizeLimit string Enforces a body size limit for Prometheus scraped metrics. If a scraped target's body response is larger than the limit, the scrape will fail. The following values are valid: an empty value to specify no limit, a numeric value in Prometheus size format (such as 64MB ), or the string automatic , which indicates that the limit will be automatically calculated based on cluster capacity. The default value is empty, which indicates no limit. externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are: error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). 
The default value is 15d . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . By default, no limit is defined. tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 15.11. PrometheusOperatorConfig 15.11.1. Description The PrometheusOperatorConfig resource defines settings for the Prometheus Operator component. Appears in: ClusterMonitoringConfiguration , UserWorkloadConfiguration Property Type Description logLevel string Defines the log level settings for Prometheus Operator. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.12. PrometheusRestrictedConfig 15.12.1. Description The PrometheusRestrictedConfig resource defines the settings for the Prometheus component that monitors user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedLabelLimit *uint64 Specifies a per-scrape limit on the number of labels accepted for a sample. If the number of labels exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelNameLengthLimit *uint64 Specifies a per-scrape limit on the length of a label name for a sample. If the length of a label name exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelValueLengthLimit *uint64 Specifies a per-scrape limit on the length of a label value for a sample. If the length of a label value exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedSampleLimit *uint64 Specifies a global limit on the number of scraped samples that will be accepted. This setting overrides the SampleLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedTargetLimit . Administrators can use this setting to keep the overall number of samples under control. The default value is 0 , which means that no limit is set. enforcedTargetLimit *uint64 Specifies a global limit on the number of scraped targets. This setting overrides the TargetLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedSampleLimit . Administrators can use this setting to keep the overall number of targets under control. The default value is 0 . externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are error , warn , info , and debug . The default setting is info . 
nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . The default value is nil . tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the storage class and size of a volume. 15.13. RemoteWriteSpec 15.13.1. Description The RemoteWriteSpec resource defines the settings for remote write storage. 15.13.2. Required url Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig Property Type Description authorization *monv1.SafeAuthorization Defines the authorization settings for remote write storage. basicAuth *monv1.BasicAuth Defines basic authentication settings for the remote write endpoint URL. bearerTokenFile string Defines the file that contains the bearer token for the remote write endpoint. However, because you cannot mount secrets in a pod, in practice you can only reference the token of the service account. headers map[string]string Specifies the custom HTTP headers to be sent along with each remote write request. Headers set by Prometheus cannot be overwritten. metadataConfig *monv1.MetadataConfig Defines settings for sending series metadata to remote write storage. name string Defines the name of the remote write queue. This name is used in metrics and logging to differentiate queues. If specified, this name must be unique. oauth2 *monv1.OAuth2 Defines OAuth2 authentication settings for the remote write endpoint. proxyUrl string Defines an optional proxy URL. queueConfig *monv1.QueueConfig Allows tuning configuration for remote write queue parameters. remoteTimeout string Defines the timeout value for requests to the remote write endpoint. sigv4 *monv1.Sigv4 Defines AWS Signature Version 4 authentication settings. tlsConfig *monv1.SafeTLSConfig Defines TLS authentication settings for the remote write endpoint. url string Defines the URL of the remote write endpoint to which samples will be sent. writeRelabelConfigs []monv1.RelabelConfig Defines the list of remote write relabel configurations. 15.14. TelemeterClientConfig 15.14.1. Description The TelemeterClientConfig resource defines settings for the telemeter-client component. 15.14.2. 
Required nodeSelector tolerations Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.15. ThanosQuerierConfig 15.15.1. Description The ThanosQuerierConfig resource defines settings for the Thanos Querier component. Appears in: ClusterMonitoringConfiguration Property Type Description enableRequestLogging bool A Boolean flag that enables or disables request logging. The default value is false . logLevel string Defines the log level setting for Thanos Querier. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Querier container. tolerations []v1.Toleration Defines tolerations for the pods. 15.16. ThanosRulerConfig 15.16.1. Description The ThanosRulerConfig resource defines configuration for the Thanos Ruler instance for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures how the Thanos Ruler component communicates with additional Alertmanager instances. The default value is nil . logLevel string Defines the log level setting for Thanos Ruler. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Ruler container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Thanos Ruler. Use this setting to configure the storage class and size of a volume. 15.17. TLSConfig 15.17.1. Description The TLSConfig resource configures the settings for TLS connections. 15.17.2. Required insecureSkipVerify Appears in: AdditionalAlertmanagerConfig Property Type Description ca *v1.SecretKeySelector Defines the secret key reference containing the Certificate Authority (CA) to use for the remote host. cert *v1.SecretKeySelector Defines the secret key reference containing the public certificate to use for the remote host. key *v1.SecretKeySelector Defines the secret key reference containing the private key to use for the remote host. serverName string Used to verify the hostname on the returned certificate. insecureSkipVerify bool When set to true , disables the verification of the remote host's certificate and name. 15.18. UserWorkloadConfiguration 15.18.1. Description The UserWorkloadConfiguration resource defines the settings responsible for user-defined projects in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. You can only enable UserWorkloadConfiguration after you have set enableUserWorkload to true in the cluster-monitoring-config config map under the openshift-monitoring namespace. 
Property Type Description alertmanager * AlertmanagerUserWorkloadConfig Defines the settings for the Alertmanager component in user workload monitoring. prometheus * PrometheusRestrictedConfig Defines the settings for the Prometheus component in user workload monitoring. prometheusOperator * PrometheusOperatorConfig Defines the settings for the Prometheus Operator component in user workload monitoring. thanosRuler * ThanosRulerConfig Defines the settings for the Thanos Ruler component in user workload monitoring.
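As an illustration of how these resources fit together, the following is a minimal, hedged example of the cluster-monitoring-config config map described in section 15.1. The structure (the config.yaml key and the enableUserWorkload , prometheusK8s , and alertmanagerMain fields) follows the definitions above; the specific retention period, storage size, and storage class name are assumptions chosen for the sketch only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
    alertmanagerMain:
      logLevel: info
    prometheusK8s:
      retention: 24h
      volumeClaimTemplate:
        spec:
          storageClassName: standard   # assumed storage class name
          resources:
            requests:
              storage: 40Gi
A user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace follows the same pattern, with its config.yaml keyed by the UserWorkloadConfiguration fields ( alertmanager , prometheus , prometheusOperator , thanosRuler ).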
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/config-map-reference-for-the-cluster-monitoring-operator
Chapter 8. Reinstalling an Existing Host as a Self-Hosted Engine Node
Chapter 8. Reinstalling an Existing Host as a Self-Hosted Engine Node You can convert an existing, standard host in a self-hosted engine environment to a self-hosted engine node capable of hosting the Manager virtual machine. Procedure Click Compute Hosts and select the host. Click Management Maintenance and click OK . Click Installation Reinstall . Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK . The host is reinstalled with self-hosted engine configuration, and is flagged with a crown icon in the Administration Portal. After reinstalling the hosts as self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes: If the new environment is running without issue, you can decommission the original Manager machine.
[ "hosted-engine --vm-status" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/Reinstalling_an_Existing_Host_as_a_Self-Hosted_Engine_Node_migrating_to_SHE
Config APIs
Config APIs OpenShift Container Platform 4.13 Reference guide for config APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/config_apis/index
Chapter 3. Migration Toolkit for Virtualization 2.4
Chapter 3. Migration Toolkit for Virtualization 2.4 Migrate virtual machines (VMs) from VMware vSphere, Red Hat Virtualization, or OpenStack to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV). The release notes describe technical changes, new features and enhancements, and known issues. 3.1. Technical changes This release has the following technical changes: Faster disk image migration from RHV Disk images are no longer converted using virt-v2v when migrating from RHV. This change speeds up migrations and also allows migration for guest operating systems that are not supported by virt-v2v. (forklift-controller#403) Faster disk transfers by ovirt-imageio client (ovirt-img) Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from RHV to the local OpenShift Container Platform cluster, accelerating the migration. Faster migration using conversion pod disk transfer When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration. Migrated virtual machines are not scheduled on the target OCP cluster The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time. StorageProfile resource needs to be updated for a non-provisioner storage class You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. VDDK 8 can be used in the VDDK image Previous versions of MTV supported only VDDK version 7 for the VDDK image. MTV supports both versions 7 and 8, as follows: If you are migrating to OCP 4.12 or earlier, use VDDK version 7. If you are migrating to OCP 4.13 or later, use VDDK version 8. 3.2. New features and enhancements This release has the following features and improvements: OpenStack migration MTV now supports migrations with OpenStack as a source provider. This feature is provided as a Technology Preview and only supports cold migrations. OCP console plugin The Migration Toolkit for Virtualization Operator now integrates the MTV web console into the Red Hat OpenShift web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427) Skip certificate validation option A Skip certificate validation option was added to the VMware and RHV providers. If selected, the provider's certificate is not validated and the UI does not ask for a CA certificate. Only third-party certificate required Only the third-party certificate needs to be specified when defining a RHV provider whose Manager is set with a third-party CA certificate. Conversion of VMs with RHEL 9 guest operating system Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332) 3.3. Known issues This release has the following known issues: Deleting migration plan does not remove temporary resources Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. 
(BZ#2018974) Unclear error status message for VM with no operating system The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846) Log archive file includes logs of a deleted migration plan or VM If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764) Migration of virtual machines with encrypted partitions fails during conversion vSphere only: Migrations from RHV and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster. Snapshots that are created during the migration in OpenStack are not deleted The Migration Controller service does not automatically delete snapshots that are created during the migration for source virtual machines in OpenStack. Workaround: The snapshots can be removed manually in OpenStack. RHV snapshots are not deleted after a successful migration The Migration Controller service does not delete snapshots automatically after a successful warm migration of a RHV VM. Workaround: Snapshots can be removed from RHV instead. (MTV-349) Migration fails during precopy/cutover while a snapshot operation is executed on the source VM Some warm migrations from RHV might fail. When running a migration plan for warm migration of multiple VMs from RHV, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. Warm migration from RHV fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user's snapshot operation to finish. (MTV-456) Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath When migrating a VM with multiple disks to more than one storage class of type hostPath, the resulting VM might not be schedulable. Workaround: It is recommended to use shared storage on the target OCP cluster. Deleting migrated VM does not remove PVC and PV When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492) PVC deletion hangs after archiving and deleting migration plan When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493) VM with multiple disks may boot from non-bootable disk after migration A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433) Non-supported guest operating systems in warm migrations Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case. 
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems. VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491) Upgrade from 2.4.0 fails When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller is immutable. Workaround: remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. The user needs to refresh the OCP Console once the forklift-console-plugin pod runs to load the upgraded MTV web console. (MTV-518) 3.4. Resolved issues This release has the following resolved issues: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption. This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later. For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack) . Improve invalid/conflicting VM name handling Improve the automatic renaming of VMs during migration to fit RFC 1123. This feature that was introduced in 2.3.4 is enhanced to cover more special cases. (MTV-212) Prevent locking user accounts due to incorrect credentials If a user specifies an incorrect password for RHV providers, they are no longer locked in RHV. An error returns when the RHV manager is accessible and adding the provider. If the RHV manager is inaccessible, the provider is added, but there would be no further attempt after failing, due to incorrect credentials. (MTV-324) Users without cluster-admin role can create new providers Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334) Convert i440fx to q35 Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430) Preserve the UUID setting in SMBIOS for a VM that is migrated from RHV The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from RHV. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of RHV. (MTV-597) Do not expose password for RHV in error messages Previously, the password that was specified for RHV manager appeared in error messages that were displayed in the web console and logs when failing to connect to RHV. 
In this release, error messages that are generated when failing to connect to RHV do not reveal the password for RHV manager. QEMU guest agent is now installed on migrated VMs The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)
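For reference, the old MTV web console mentioned in the OCP console plugin note above is re-enabled through the ForkliftController custom resource. The snippet below is a hedged sketch only: the feature_ui field comes from that note, while the API version, resource name, and openshift-mtv namespace are assumptions based on a default MTV Operator installation and may differ in your environment.
apiVersion: forklift.konveyor.io/v1beta1   # assumed API version
kind: ForkliftController
metadata:
  name: forklift-controller                # assumed default resource name
  namespace: openshift-mtv                 # assumed default namespace
spec:
  feature_ui: true                         # re-enables the old MTV web console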
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/release_notes/rn-24_release-notes
Chapter 6. Installing a cluster on GCP in a restricted network
Chapter 6. Installing a cluster on GCP in a restricted network In OpenShift Container Platform 4.15, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . 6.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). 
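Before you generate the configuration on the mirror host, it can help to confirm which installer build you are about to run, especially if more than one release has been mirrored. This is an optional, illustrative check only, using the same installer binary referenced throughout this procedure:
$ ./openshift-install version
The reported release image should match the OpenShift Container Platform version that you mirrored to your registry.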
Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. 
For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 6.5.3. 
Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 6.2. Machine series for 64-bit ARM machines Tau T2A 6.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 6.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 6.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 6.5.7. 
Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 15 17 18 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 Provide the contents of the certificate file that you used for your mirror registry. 28 Provide the imageContentSources section from the output of the command to mirror the repository. 6.5.8. 
Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 6.5.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 6.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 6.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 6.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
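If you want to keep the generated credentials manifests and private key material separate from your working directory, one option is to create a dedicated directory up front and pass it to the ccoctl commands with the --output-dir flag mentioned above. This is a sketch only; the directory name is illustrative:
$ mkdir -p _ccoctl_output
$ # then pass --output-dir=_ccoctl_output to the ccoctl gcp create-all command in the procedure that follows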
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 6.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.10. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.12. Next steps Validate an installation. Customize your cluster. Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks. If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores. If necessary, you can opt out of remote health reporting. If necessary, see Registering your disconnected cluster.
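As a final, optional sanity check for a cluster installed in a restricted network, you can confirm that the default catalog sources are disabled and that the cluster Operators have settled. These commands are illustrative only and are not a required part of the procedure:
$ oc get operatorhub cluster -o yaml    # spec should include disableAllDefaultSources: true
$ oc get catalogsource -n openshift-marketplace
$ oc get clusteroperators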
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information 
about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/installing-restricted-networks-gcp-installer-provisioned
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview OpenShift Container Platform 4 clusters are different from OpenShift Container Platform 3 clusters. OpenShift Container Platform 4 clusters contain new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. To learn more about migrating from OpenShift Container Platform 3 to 4 see About migrating from OpenShift Container Platform 3 to 4 . 1.1. Differences between OpenShift Container Platform 3 and 4 Before migrating from OpenShift Container Platform 3 to 4, you can check differences between OpenShift Container Platform 3 and 4 . Review the following information: Architecture Installation and update Storage , network , logging , security , and monitoring considerations 1.2. Planning network considerations Before migrating from OpenShift Container Platform 3 to 4, review the differences between OpenShift Container Platform 3 and 4 for information about the following areas: DNS considerations Isolating the DNS domain of the target cluster from the clients . Setting up the target cluster to accept the source DNS domain . You can migrate stateful application workloads from OpenShift Container Platform 3 to 4 at the granularity of a namespace. To learn more about MTC see Understanding MTC . Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . 1.3. Installing MTC Review the following tasks to install the MTC: Install the Migration Toolkit for Containers Operator on target cluster by using Operator Lifecycle Manager (OLM) . Install the legacy Migration Toolkit for Containers Operator on the source cluster manually . Configure object storage to use as a replication repository . 1.4. Upgrading MTC You upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.12 by using OLM. You upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. 1.5. Reviewing premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the premigration checklists . 1.6. Migrating applications You can migrate your applications by using the MTC web console or the command line . 1.7. Advanced migration options You can automate your migrations and modify MTC custom resources to improve the performance of large-scale migrations by using the following options: Running a state migration Creating migration hooks Editing, excluding, and mapping migrated resources Configuring the migration controller for large migrations 1.8. Troubleshooting migrations You can perform the following troubleshooting tasks: Viewing migration plan resources by using the MTC web console Viewing the migration plan aggregated log file Using the migration log reader Accessing performance metrics Using the must-gather tool Using the Velero CLI to debug Backup and Restore CRs Using MTC custom resources for troubleshooting Checking common issues and concerns 1.9. Rolling back a migration You can roll back a migration by using the MTC web console, by using the CLI, or manually. 1.10. Uninstalling MTC and deleting resources You can uninstall the MTC and delete its resources to clean up the cluster.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migrating_from_version_3_to_4/migration-from-version-3-to-4-overview
Administration Guide
Administration Guide Red Hat Virtualization 4.4 Administration tasks in Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document provides information and procedures relevant to Red Hat Virtualization administrators.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/index
Chapter 5. Deploy standalone Multicloud Object Gateway
Chapter 5. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using the local storage devices. 5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.1.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. 
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 5.1.2. Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. Ensure that you have a storage class and that it is set as the default. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click Next . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) 5.2. Deploy standalone Multicloud Object Gateway using local storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as either 4.9 or stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. 
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 5.2.3. Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. Ensure that you have a storage class and that it is set as the default. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click Next . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node)
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_red_hat_virtualization_platform/deploy-standalone-multicloud-object-gateway
Chapter 15. Planning for Installation on IBM Z
Chapter 15. Planning for Installation on IBM Z 15.1. Pre-installation Red Hat Enterprise Linux 7 runs on zEnterprise 196 or later IBM mainframe systems. The installation process assumes that you are familiar with the IBM Z and can set up logical partitions (LPARs) and z/VM guest virtual machines. For additional information on IBM Z, see http://www.ibm.com/systems/z . For installation of Red Hat Enterprise Linux on IBM Z, Red Hat supports DASD (Direct Access Storage Device) and FCP (Fiber Channel Protocol) storage devices. Before you install Red Hat Enterprise Linux, you must decide on the following: Decide whether you want to run the operating system on an LPAR or as a z/VM guest operating system. Decide if you need swap space and if so, how much. Although it is possible (and recommended) to assign enough memory to a z/VM guest virtual machine and let z/VM do the necessary swapping, there are cases where the amount of required RAM is hard to predict. Such instances should be examined on a case-by-case basis. See Section 18.15.3.4, "Recommended Partitioning Scheme" . Decide on a network configuration. Red Hat Enterprise Linux 7 for IBM Z supports the following network devices: Real and virtual Open Systems Adapter (OSA) Real and virtual HiperSockets LAN channel station (LCS) for real OSA You require the following hardware: Disk space. Calculate how much disk space you need and allocate sufficient disk space on DASDs [2] or SCSI [3] disks. You require at least 10 GB for a server installation, and 20 GB if you want to install all packages. You also require disk space for any application data. After the installation, you can add or delete more DASD or SCSI disk partitions. The disk space used by the newly installed Red Hat Enterprise Linux system (the Linux instance) must be separate from the disk space used by other operating systems you have installed on your system. For more information about disks and partition configuration, see Section 18.15.3.4, "Recommended Partitioning Scheme" . RAM. Acquire 1 GB (recommended) for the Linux instance. With some tuning, an instance might run with as little as 512 MB RAM. Note When initializing swap space on an FBA ( Fixed Block Architecture ) DASD using the SWAPGEN utility, the FBAPART option must be used. [2] Direct Access Storage Devices (DASDs) are hard disks that allow a maximum of three partitions per device. For example, dasda can have partitions dasda1 , dasda2 , and dasda3 . [3] Using the SCSI-over-Fibre Channel device driver (the zfcp device driver) and a switch, SCSI LUNs can be presented to Linux on IBM Z as if they were locally attached SCSI drives.
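As a small illustrative sketch, not part of the planning text above, the following commands can be run on an existing Linux instance on IBM Z to see which DASDs are available and how much swap is currently configured; they assume the s390utils tools are installed:

# List the DASD devices with their bus IDs, device nodes, and sizes
lsdasd
# Show the swap devices currently in use and overall memory, to help size swap for the new instance
cat /proc/swaps
free -m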
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-installation-planning-s390
Chapter 11. Optimizing networking
Chapter 11. Optimizing networking The OpenShift SDN uses Open vSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, network interface controller (NIC) offloads, multi-queue, and ethtool settings. OVN-Kubernetes uses Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN as the tunnel protocol. VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both the outer and inner packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPUs are typically capable of handling much more than one Gbps of network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 11.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is only configured at the time of OpenShift Container Platform installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The SDN overlay's MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, set this to 1450 . On a jumbo frame ethernet network, set this to 8950 . For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN. Other SDN solutions might require the value to be more or less. 11.2. Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes.
It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 11.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPSec is enabled and will lead to decreased throughput and increased CPU usage. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes default CNI network provider Configuration parameters for the OpenShift SDN default CNI network provider
[ "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/optimizing-networking
Chapter 4. Desktop
Chapter 4. Desktop Kate now retains printing preferences Previously, the Kate text editor did not retain printing preferences, which meant that the user was forced to set all the Header & Footer and Margin preferences after every print job or session. This bug has been fixed, and Kate again retains the printing preferences as expected. LibreOffice upgrade The libreoffice packages have been upgraded to upstream version 4.2.8.2, which provides a number of bug fixes and enhancements over the previous version, including: OpenXML interoperability has been improved. Additional statistics functions have been added to the Calc application, thus improving interoperability with Microsoft Excel and its Analysis ToolPak add-in. Various performance improvements have been implemented in Calc. This update adds new import filters for importing files from the Apple Keynote and Abiword applications. The export filter for the MathML markup language has been improved. This update adds a new start screen that includes thumbnails of recently opened documents. A visual cue is now displayed in the Slide Sorter window for slides with transitions or animations. This update improves trend lines in charts. LibreOffice now supports BCP 47 language tags. For a complete list of bug fixes and enhancements provided by this upgrade, refer to https://wiki.documentfoundation.org/ReleaseNotes/4.2 New package: libgovirt The libgovirt package has been added to this Red Hat Enterprise Linux release. The libgovirt package is a library that allows the remote-viewer tool to connect to virtual machines managed by oVirt and Red Hat Enterprise Virtualization. dejavu-fonts upgraded to upstream version 2.33 The dejavu-fonts packages have been upgraded to upstream version 2.33, which provides a number of bug fixes and enhancements over the previous version. Notably, this adds a number of new characters and symbols to the supported fonts. Support for transliteration from Latin to US-ASCII Prior to this update, icu in Red Hat Enterprise Linux 6 did not support the Latin to US-ASCII transliteration mode of the transliterator_transliterate() function. Consequently, the user could not, for example, easily remove non-ASCII characters from PHP code strings. With this update, the user can use transliterator_transliterate() to transliterate Latin characters to US-ASCII characters.
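The transliteration note above can be illustrated with a small, hypothetical one-liner; it assumes PHP with the intl (ICU) extension installed, and the 'Any-Latin; Latin-ASCII' transliterator ID is the standard ICU identifier for this conversion rather than anything named in the release note:

# Strip accents and other non-ASCII characters from a string (example input only)
php -r "echo transliterator_transliterate('Any-Latin; Latin-ASCII', 'Crème brûlée déjà vu'), PHP_EOL;"
# Expected output: Creme brulee deja vu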
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/desktop
Appendix A. About the Authors
Appendix A. About the Authors
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/ansible_automation_platform_1.2_to_2_migration_guide/about_the_authors
14.6. Converting an Existing Image to Another Format
14.6. Converting an Existing Image to Another Format The convert option is used to convert one recognized image format to another image format. For a list of accepted formats, see Section 14.12, "Supported qemu-img Formats" . The -p parameter shows the progress of the command (optional and not available for every command), and the -S flag allows for the creation of a sparse file , which is included within the disk image. Sparse files, for all practical purposes, function like standard files, except that physical blocks that contain only zeros (that is, nothing) are not actually allocated on disk. When the operating system sees this file, it treats it as if it exists and takes up actual disk space, even though in reality it does not take any. This is particularly helpful when creating a disk for a guest virtual machine as this gives the appearance that the disk has taken much more disk space than it has. For example, if you set -S to 50GB on a disk image that is 10GB, then the 10GB disk image will appear to be 60GB in size even though only 10GB is actually being used. Convert the disk image filename to disk image output_filename using format output_format . The disk image can be optionally compressed with the -c option, or encrypted with the -o option by setting -o encryption . Note that the options available with the -o parameter differ with the selected format. Only the qcow2 format supports encryption or compression. qcow2 encryption uses the AES format with secure 128-bit keys. qcow2 compression is read-only, so if a compressed sector is converted from qcow2 format, it is written to the new format as uncompressed data. Image conversion is also useful to get a smaller image when using a format which can grow, such as qcow or cow . The empty sectors are detected and suppressed from the destination image.
[ "qemu-img convert [-c] [-p] [-f fmt ] [-t cache ] [-O output_fmt ] [-o options ] [-S sparse_size ] filename output_filename" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-converting_an_existing_image_to_another_format
7.5 Release Notes
7.5 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.5 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/index
Chapter 4. PodSecurityPolicyReview [security.openshift.io/v1]
Chapter 4. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicyReviewSpec defines specification for PodSecurityPolicyReview status object PodSecurityPolicyReviewStatus represents the status of PodSecurityPolicyReview. 4.1.1. .spec Description PodSecurityPolicyReviewSpec defines specification for PodSecurityPolicyReview Type object Required template Property Type Description serviceAccountNames array (string) serviceAccountNames is an optional set of ServiceAccounts to run the check with. If serviceAccountNames is empty, the template.spec.serviceAccountName is used, unless it's empty, in which case "default" is used instead. If serviceAccountNames is specified, template.spec.serviceAccountName is ignored. template PodTemplateSpec template is the PodTemplateSpec to check. The template.spec.serviceAccountName field is used if serviceAccountNames is empty, unless the template.spec.serviceAccountName is empty, in which case "default" is used. If serviceAccountNames is specified, template.spec.serviceAccountName is ignored. 4.1.2. .status Description PodSecurityPolicyReviewStatus represents the status of PodSecurityPolicyReview. Type object Required allowedServiceAccounts Property Type Description allowedServiceAccounts array allowedServiceAccounts returns the list of service accounts in this namespace that have the power to create the PodTemplateSpec. allowedServiceAccounts[] object ServiceAccountPodSecurityPolicyReviewStatus represents ServiceAccount name and related review status 4.1.3. .status.allowedServiceAccounts Description allowedServiceAccounts returns the list of service accounts in this namespace that have the power to create the PodTemplateSpec. Type array 4.1.4. .status.allowedServiceAccounts[] Description ServiceAccountPodSecurityPolicyReviewStatus represents ServiceAccount name and related review status Type object Required name Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy A nil , indicates that it was denied. name string name contains the allowed and the denied ServiceAccount name reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 4.2. 
API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyreviews POST : create a PodSecurityPolicyReview 4.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyreviews Table 4.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a PodSecurityPolicyReview Table 4.3. Body parameters Parameter Type Description body PodSecurityPolicyReview schema Table 4.4. HTTP responses HTTP code Reponse body 200 - OK PodSecurityPolicyReview schema 201 - Created PodSecurityPolicyReview schema 202 - Accepted PodSecurityPolicyReview schema 401 - Unauthorized Empty
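As a minimal, unofficial sketch of using this endpoint, the following request body checks which service accounts in a namespace could create a simple pod template; the namespace, file name, and container image are placeholders, and whether the status is echoed back directly depends on the client version:

apiVersion: security.openshift.io/v1
kind: PodSecurityPolicyReview
spec:
  template:
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest

# Submit the review and print the returned object, including status.allowedServiceAccounts
oc create -f pspreview.yaml -n my-project -o yaml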
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_apis/podsecuritypolicyreview-security-openshift-io-v1
8.9. Improving Guest Virtual Machine Response Time
8.9. Improving Guest Virtual Machine Response Time Guest virtual machines can sometimes be slow to respond with certain workloads and usage patterns. Examples of situations that may cause slow or unresponsive guest virtual machines: Severely overcommitted memory. Overcommitted memory with high processor usage. Other busy or stalled processes (not qemu-kvm processes) on the host physical machine. KVM guest virtual machines function as Linux processes. Linux processes are not permanently kept in main memory (physical RAM) and will be placed into swap space (virtual memory), especially if they are not being used. If a guest virtual machine is inactive for long periods of time, the host physical machine kernel may move the guest virtual machine into swap. As swap is slower than physical memory, it may appear that the guest is not responding. This changes once the guest is loaded into main memory. Note that the process of loading a guest virtual machine from swap to main memory may take several seconds per gigabyte of RAM assigned to the guest virtual machine, depending on the type of storage used for swap and the performance of the components. KVM guest virtual machine processes may be moved to swap regardless of whether memory is overcommitted or of overall memory usage. Using unsafe overcommit levels, or overcommitting with swap turned off for guest virtual machine processes or other critical processes, is not recommended. Always ensure the host physical machine has sufficient swap space when overcommitting memory. For more information on overcommitting with KVM, refer to Chapter 6, Overcommitting with KVM . Warning Virtual memory allows a Linux system to use more memory than there is physical RAM on the system. Underused processes are swapped out, which allows active processes to use memory, improving memory utilization. Disabling swap reduces memory utilization as all processes are stored in physical RAM. If swap is turned off, do not overcommit guest virtual machines. Overcommitting guest virtual machines without any swap can cause guest virtual machines or the host physical machine system to crash.
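To see whether this is what is happening on a particular host, a rough check, not part of the guide itself, is to look at overall swap pressure and at the VmSwap value of each qemu-kvm process; on older kernels the VmSwap field may be absent from /proc/<pid>/status, in which case per-process swap has to be estimated from /proc/<pid>/smaps instead:

# Overall memory and swap usage on the host physical machine
free -m
# Per-guest swap usage for every running qemu-kvm process
for pid in $(pgrep qemu-kvm); do
  printf '%s: ' "$pid"; grep VmSwap /proc/$pid/status
done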
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-improving-vm-response-time
23.3. Methods
23.3. Methods 23.3.1. Creating a MAC Address Pool Creation of a MAC address pool requires values for name and ranges . Example 23.2. Creating a MAC address pool 23.3.2. Updating a MAC Address Pool The name , description , allow_duplicates , and ranges elements are updatable post-creation. Example 23.3. Updating a MAC address pool 23.3.3. Removing a MAC Address Pool Removal of a MAC address pool requires a DELETE request. Example 23.4. Removing a MAC address pool
[ "POST /ovirt-engine/api/macpools HTTP/1.1 Accept: application/xml Content-type: application/xml <mac_pool> <name>MACPool</name> <description>A MAC address pool</description> <allow_duplicates>true</allow_duplicates> <default_pool>false</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> </ranges> </mac_pool>", "PUT /ovirt-engine/api/macpools/ab39bbc1-1d64-4737-9b20-ce081f99b0e1 HTTP/1.1 Accept: application/xml Content-type: application/xml <mac_pool> <name>UpdatedMACPool</name> <description>An updated MAC address pool</description> <allow_duplicates>false</allow_duplicates> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> <range> <from>02:1A:4A:01:00:00</from> <to>02:1A:4A:FF:FF:FF</to> </range> </ranges> </mac_pool>", "DELETE /ovirt-engine/api/macpools/ab39bbc1-1d64-4737-9b20-ce081f99b0e1 HTTP/1.1 HTTP/1.1 204 No Content" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-mac_address_pool_methods
Chapter 4. Uninstalling OpenShift Data Foundation
Chapter 4. Uninstalling OpenShift Data Foundation 4.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/uninstalling_openshift_data_foundation
Networking
Networking OpenShift Container Platform 4.7 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
[ "ssh -i <ssh-key-path> core@<master-hostname>", "oc get -n openshift-network-operator deployment/network-operator", "NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m", "oc get clusteroperator/network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m", "oc describe network.config/cluster", "Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none>", "oc describe clusteroperators/network", "oc logs --namespace=openshift-network-operator deployment/network-operator", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: 1 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 2 - 172.30.0.0/16 defaultNetwork: 3 type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 0s", "oc get -n openshift-dns-operator deployment/dns-operator", "NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h", "oc get clusteroperator/dns", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m", "oc describe dns.operator/default", "Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2", "oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'", "[172.30.0.0/16]", "oc edit dns.operator/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: foo-server 1 zones: 2 - example.com forwardPlugin: upstreams: 3 - 1.1.1.1 - 2.2.2.2:5353 - name: bar-server zones: - bar.com - example.com forwardPlugin: upstreams: - 3.3.3.3 - 4.4.4.4:5454", "oc get configmap/dns-default -n openshift-dns -o yaml", "apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 
/etc/resolv.conf { policy sequential } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns", "oc describe clusteroperators/dns", "oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com", "nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe --namespace=openshift-ingress-operator ingresscontroller/default", "oc describe clusteroperators/ingress", "oc logs --namespace=openshift-ingress-operator deployments/ingress-operator", "oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>", "oc --namespace openshift-ingress-operator get ingresscontrollers", "NAME AGE default 10m", "oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key", "oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'", "echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate", "subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM", "oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'", "echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate", "subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT", "oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'", "2", "oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge", "ingresscontroller.operator.openshift.io/default patched", "oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'", "3", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container", "oc -n openshift-ingress logs deployment.apps/router-default -c logs", "2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 
router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null", "cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "cat router-internal.yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3", "oc create -f <name>-ingress-controller.yaml 1", "oc --all-namespaces=true get ingresscontrollers", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge", "spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "oc edit IngressController", "spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed", "oc edit IngressController", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append", "oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true", "oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true", "oc edit ingresses.config/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2", "oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed", "oc get routes NAME HOST/PORT PATH 
SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None", "oc get podnetworkconnectivitycheck -n openshift-network-diagnostics", "NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m", "oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml", "apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: 
dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 
'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"", "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]", "apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP", "apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp", "oc create -f load-sctp-module.yaml", "oc get nodes", "apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP", "oc create -f sctp-server.yaml", "apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102", "oc create -f sctp-service.yaml", "apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]", "oc apply -f sctp-client.yaml", "oc rsh sctpserver", "nc -l 30102 --sctp", "oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'", "oc rsh sctpclient", "nc <cluster_IP> 30102 --sctp", "apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2019-11-15T08:57:11Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp 2 resourceVersion: \"487462\" selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0 uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f spec: {} status: devices: 3 - name: eno1 - name: eno2 - name: ens787f0 - name: ens787f1 - name: ens801f0 - name: ens801f1 - name: ens802f0 - name: ens802f1 - name: ens803", "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-ptp labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\" EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators 
namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp EOF", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"USD{OC_VERSION}\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase ptp-operator.4.4.0-202006160135 Succeeded", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <name> 1 namespace: openshift-ptp 2 spec: profile: 3 - name: \"profile1\" 4 interface: \"ens787f1\" 5 ptp4lOpts: \"-s -2\" 6 phc2sysOpts: \"-a -r\" 7 recommend: 8 - profile: \"profile1\" 9 priority: 10 10 match: 11 - nodeLabel: \"node-role.kubernetes.io/worker\" 12 nodeName: \"dev-worker-0\" 13", "oc create -f <filename> 1", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES linuxptp-daemon-4xkbb 1/1 Running 0 43m 192.168.111.15 dev-worker-0 <none> <none> linuxptp-daemon-tdspf 1/1 Running 0 43m 192.168.111.11 dev-master-0 <none> <none> ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.128.0.116 dev-master-0 <none> <none> oc logs linuxptp-daemon-4xkbb -n openshift-ptp I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 2 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2 3 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r 4 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ I1115 09:41:18.117934 4143292 daemon.go:186] Starting phc2sys I1115 09:41:18.117985 4143292 daemon.go:187] phc2sys cmd: &{Path:/usr/sbin/phc2sys Args:[/usr/sbin/phc2sys -a -r] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>} I1115 09:41:19.118175 4143292 daemon.go:186] Starting ptp4l I1115 09:41:19.118209 4143292 daemon.go:187] ptp4l cmd: &{Path:/usr/sbin/ptp4l Args:[/usr/sbin/ptp4l -m -f /etc/ptp4l.conf -i ens787f1 -s -2] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>} ptp4l[102189.864]: selected /dev/ptp5 as PTP clock ptp4l[102189.886]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE ptp4l[102189.886]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", 
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy \"default-deny\" created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy -n <namespace>", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/allow-same-namespace deleted", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - 
apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: project2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", 
\"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: ' { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", 2 \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", \"vrfname\": \"example-vrf-name\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: 
<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode", "oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded", "apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]", "apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"USD{OC_VERSION}\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-network-operator.4.4.0-202006160135 Succeeded", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'", "oc patch sriovoperatorconfig default --type=merge -n 
openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'", "oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node-label>} }]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", ...] 12 netFilter: \"<filter_string>\" 13 deviceType: <device_type> 14 isRdma: false 15 linkType: <link_type> 16", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"15b3\" deviceID: \"101b\" rootDevices: - \"0000:19:00.0\" linkType: ib isRdma: true", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"15b3\" deviceID: \"101b\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2", "pfNames: [\"netpf0#2-7\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>", "\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }", "oc create -f sriov-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE additional-sriov-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: 
resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"", "oc create -f <filename> 1", "oc describe pod sample-pod", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example", "apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] 
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1", "oc create -f intel-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: \"{}\" 1 vlan: <vlan> resourceName: intelnics", "oc create -f intel-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f intel-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] 
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-rdma-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-rdma-network.yaml", "apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-rdma-pod.yaml", "oc delete sriovnetwork -n openshift-sriov-network-operator --all", "oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all", "oc delete sriovibnetwork -n openshift-sriov-network-operator --all", "oc delete crd sriovibnetworks.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io", "oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io", "oc delete crd sriovnetworks.sriovnetwork.openshift.io", "oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io", "oc delete mutatingwebhookconfigurations network-resources-injector-config", "oc delete MutatingWebhookConfiguration sriov-operator-webhook-config", "oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config", "oc delete namespace openshift-sriov-network-operator", "oc patch netnamespace <project_name> --type=merge -p \\ 1 '{ \"egressIPs\": [ \"<ip_address>\" 2 ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p \\ 1 '{ \"egressCIDRs\": [ \"<ip_address_range_1>\", \"<ip_address_range_2>\" 2 ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'", "oc patch netnamespace <project> --type=merge -p \\ 1 '{ \"egressIPs\": [ 2 \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}'", "oc patch hostsubnet <node_name> --type=merge -p \\ 1 '{ \"egressIPs\": [ 2 \"<ip_address_1>\", \"<ip_address_N>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0", "oc 
create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressnetworkpolicy.network.openshift.io/v1 created", "oc get egressnetworkpolicy --all-namespaces", "oc describe egressnetworkpolicy <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressnetworkpolicy", "oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressnetworkpolicy", "oc delete -n <project> egressnetworkpolicy <name>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-", "!*.example.com !192.168.1.0/24 192.168.2.1 *", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: 
pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"", "80 172.16.12.11 100 example.com", "8080 192.168.60.252 80 8443 web.example.com 443", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy", "oc create -f egress-router-service.yaml", "Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27", "oc delete configmap egress-routes --ignore-not-found", "oc create configmap egress-routes --from-file=destination=my-egress-destination.txt", "env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination", "oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-", "oc adm pod-network join-projects --to=<project1> <project2> <project3>", "oc get netnamespaces", "oc adm pod-network isolate-projects <project1> <project2>", "oc adm pod-network make-projects-global <project1> <project2>", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]", "oc get networks.operator.openshift.io -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List", "oc get clusteroperator network", "NAME 
VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m", "oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml", "oc annotate Network.operator.openshift.io cluster 'networkoperator.openshift.io/network-migration'=\"\"", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\" :true } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": \"<prefix>\" } ], \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes", "oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc annotate Network.operator.openshift.io cluster networkoperator.openshift.io/network-migration-", "oc delete namespace openshift-sdn", "oc annotate Network.operator.openshift.io cluster 'networkoperator.openshift.io/network-migration'=\"\"", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'", "oc patch MachineConfigPool worker --type='merge' 
--patch '{ \"spec\":{ \"paused\" :true } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml", "oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc annotate Network.operator.openshift.io cluster networkoperator.openshift.io/network-migration-", "oc delete namespace openshift-ovn-kubernetes", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 ports: 5", "ports: - port: <port> 1 protocol: <protocol> 2", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1 ports: - port: 80 protocol: TCP - port: 443", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressfirewall.k8s.ovn.org/v1 created", "oc get egressfirewall 
--all-namespaces", "oc describe egressfirewall <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressfirewall", "oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressfirewall", "oc delete -n <project> egressfirewall <name>", "apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: assignments: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4", "namespaceSelector: 1 matchLabels: <label_name>: <label_value>", "podSelector: 1 matchLabels: <label_name>: <label_value>", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development", "oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa", "oc apply -f <egressips_name>.yaml 1", "egressips.k8s.ovn.org/<egressips_name> created", "oc label ns <namespace> env=qa 1", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: k8s.v1.cni.cncf.io/networks: egress-router-redirect spec: 2 containers: - name: egress-router-redirect image: registry.redhat.io/openshift3/ose-pod", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: egress-router-redirect <.> spec: config: '{ \"cniVersion\": \"0.4.0\", \"type\": \"egress-router\", \"name\": \"egress-router\", \"ip\": { \"addresses\": [ \"192.168.12.99/24\" <.> ], \"destinations\": [ \"192.168.12.91/32\" <.> ], \"gateway\": \"192.168.12.1\" <.> } }'", "apiVersion: v1 kind: Pod metadata: name: egress-router-pod annotations: k8s.v1.cni.cncf.io/networks: egress-router-redirect <.> spec: containers: - name: egress-router-pod image: registry.redhat.com/openshift3/ose-pod", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: database protocol: TCP port: 3306 type: ClusterIP selector: name: egress-router-pod", "oc get events --field-selector involvedObject.name=egress-router-pod", "LAST SEEN TYPE REASON OBJECT MESSAGE 5m4s Normal Scheduled pod/egress-router-pod Successfully assigned default/egress-router-pod to ci-ln-9x2bnsk-f76d1-j2v6g-worker-c-24g65 5m3s Normal AddedInterface pod/egress-router-pod Add eth0 [10.129.2.31/23] 5m3s Normal AddedInterface pod/egress-router-pod Add net1 
[192.168.12.99/24] from default/egress-router-redirect", "POD_NODENAME=USD(oc get pod egress-router-pod -o jsonpath=\"{.spec.nodeName}\")", "oc debug node/USDPOD_NODENAME", "chroot /host", "crictl ps --name egress-router-redirect | awk '{print USD1}'", "CONTAINER bac9fae69ddb6", "crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'", "68857", "nsenter -n -t 68857", "ip route", "default via 192.168.12.1 dev net1 10.129.2.0/23 dev eth0 proto kernel scope link src 10.129.2.31 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1", "oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "oc new-project hello-openshift", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json", "oc expose pod/hello-openshift", "oc expose svc hello-openshift", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 to: kind: Service name: hello-openshift", "oc get ingresses.config/cluster -o jsonpath={.spec.domain}", "oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1", "oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s", "apiVersion: v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3", "tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1", "tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789", "oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"", "oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"", "ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')", "curl USDROUTE_NAME -k -c /tmp/cookie_jar", "curl USDROUTE_NAME -k -b /tmp/cookie_jar", "apiVersion: route.openshift.io/v1 kind: Route 
metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1", "oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge", "spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 spec: rules: - host: www.example.com http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate", "spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443", "oc apply -f ingress.yaml", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE-----", "oc create route passthrough route-passthrough-secured --service=frontend --port=8080", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend", "apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253", "{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: null", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2", "policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32", "oc describe networks.config cluster", "oc edit networks.config cluster", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1", "oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "cat router-internal.yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project myproject", "oc new-app 
nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "oc project project1", "apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5", "oc create -f <file-name>", "oc create -f mysql-lb.yaml", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m", "curl <public-ip>:<port>", "curl 172.29.121.74:3306", "mysql -h 172.30.131.89 -u admin -p", "Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc replace --force --wait -f ingresscontroller.yml", "oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS", "cat ingresscontroller-aws-nlb.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB", "oc create -f ingresscontroller-aws-nlb.yaml", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'", "apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: - 192.174.120.10", "oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'", "oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'", "\"mysql-55-rhel7\" patched", "oc get svc", "NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m", "oc adm policy add-cluster-role-to-user cluster-admin <user_name>", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex --type=NodePort --name=nodejs-ex-nodeport --generator=\"service/v2\"", "service/nodejs-ex-nodeport exposed", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP 
PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s", "oc delete svc nodejs-ex", "oc get nns", "oc get nns node01 -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 4 type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: eth1", "oc apply -f <br1-eth1-policy.yaml> 1", "oc get nncp", "oc get nncp <policy> -o yaml", "oc get nnce", "oc get nnce <node>.<policy> -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "oc apply -f <br1-eth1-policy.yaml> 1", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond enslaving eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 slaves: 12 - eth1 - eth2 mtu: 1450 13", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: slaves: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10", "interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true", "interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false", "interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true", "interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true", "interfaces: dns-resolver: config: search: - example.com - example.org server: - 
8.8.8.8", "interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01", "oc apply -f ens01-bridge-testfail.yaml", "nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created", "oc get nncp", "NAME STATUS ens01-bridge-testfail FailedToConfigure", "oc get nnce", "NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure", "oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'", "error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:", "oc get nns control-plane-1 -o yaml", "- ipv4: name: ens1 state: up type: ethernet", "oc edit nncp ens01-bridge-testfail", "port: - name: ens1", "oc get nncp", "NAME STATUS ens01-bridge-testfail SuccessfullyConfigured", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc edit proxy/cluster", "apiVersion: 
config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} status: {}", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2", "oc -n openshift-kuryr edit cm kuryr-config", "kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1", "oc -n openshift-kuryr edit cm kuryr-config", "kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn", "openstack loadbalancer list | grep amphora", "a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora", "openstack loadbalancer list | grep ovn", "2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn", "openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>", "openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER", "openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS", "openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443", "for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value 
USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP", "oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml", "apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2", "oc apply -f external_router.yaml", "oc -n openshift-ingress get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h", "openstack loadbalancer list | grep router-external", "| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |", "openstack floating ip list | grep 172.30.235.33", "| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |", "listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listenmy-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listenmy-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check", "<load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain>", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0", "(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + 
on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/networking/index
Chapter 3. Try It
Chapter 3. Try It Data scientists and developers can try OpenShift AI and access tutorials and activities in the Red Hat Developer sandbox environment. IT operations administrators can try OpenShift AI in their own cluster with a 60-day product trial.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/introduction_to_red_hat_openshift_ai_cloud_service/try_it
4.5. Using Security Officer Mode
4.5. Using Security Officer Mode The Enterprise Security Client, together with the TPS subsystem, supports a special security officer mode of operation. This mode allows a supervisory individual, a security officer, the ability to oversee the face to face enrollment of regular users in a given organization. Security officer mode provides the ability to enroll individuals under the supervision of a security officer, a designated user-type who can manage other user's smart cards in face-to-face and very secure operations. Security officer mode overlaps with some regular user operations, with additional security features: The ability to search for an individual within an organization. An interface that displays a photo and other pertinent information about an individual. The ability to enroll approved individuals. Formatting or resetting a user's card. Formatting or resetting a security officer's card. Enrolling a temporary card for a user that has misplaced their primary card. Storing TPS server information on a card. This Phone Home information is used by the Enterprise Security Client to contact a given TPS server installation. Working in the security officer mode falls into two distinct areas: Creating and managing security officers. Managing regular users by security officers. When security officer mode is enabled, the Enterprise Security Client uses an external user interface provided by the server. This interface takes control of smart card operations in place of the local XUL code that the Enterprise Security Client normally uses. The external interface maintains control until security officer mode is disabled. Note It is a good idea to run security officer clients over SSL, so make sure that the TPS is configured to run in SSL, and then point the Enterprise Security Client to the TPS's SSL agent port. 4.5.1. Enabling Security Officer Mode There are two areas where the security officer mode must be configured, both in the TPS and in the Enterprise Security Client's esc-prefs.js file. In the TPS: Add the security officer user entry to the TPS database as a member of the TUS Officers group. This group is created by default in the TPS LDAP database and is the expected location for all security officer user entries. Note It can be simpler to add and copy user entries in the LDAP database using the Red Hat Directory Server Console. Using the Directory Server Console is described in the Red Hat Directory Server Administrators Guide in section 3.1.2, "Creating Directory Entries ." There are two subtrees associated with the TPS, each associated with a different database. (Commonly, both databases can be on the same server, but that is not required.) The first suffix, within the authentication database , is for external users; the TPS checks their user credentials against the directory to authenticate any user attempting to enroll a smart card. This has a distinguished name (DN) like dc=server,dc=example,dc=com . The other database is used for internal TPS instance entries, including TPS agents, administrators, and security officers. This subtree is within the internal database for the TPS, which includes the token database . This subtree has a DN based on the TPS server, like dc=server.example.com-pki-tps . The TUS Officers group entry is under the dc=server.example.com-pki-tps suffix. The LDAP directory and the suffix are defined in the token profile in the TPS CS.cfg file in the authId and baseDN parameters for the security officer's auth instance. 
For example: Any security officer entry has to be a child entry of the TUS Officers group entry. This means that the group entry is the main entry, and the user entry is directly beneath it in the directory tree. The TUS Officers group entry is cn=TUS Officers,ou=Groups,dc=server.example.com-pki-tps . For example, to add the security officer entry using ldapmodify : Press the Enter key twice to send the entry, or use Ctrl+D . Then, configure the Enterprise Security Client. First, trust the CA certificate chain. Note This step is only required if the certificate is not yet trusted in the Enterprise Security Client database. If you want to point the Enterprise Security Client to a database which already contains the required certificates, use the esc.global.alt.nss.db in the esc-prefs.js file to point to another database. Open the CA's end-entities page. Click the Retrieval tab, and download the CA certificate chain. Open the Enterprise Security Client. Click the View Certificates button. Click the Authorities tab. Click the Import button, and import the CA certificate chain. Set the trust settings for the CA certificate chain. Then, format and enroll the security officer's token. This token is used to access the security officer Smart Card Manager UI. Insert a blank token. When the prompt for the Phone Home information opens, enter the security officer URL. Click the Format button to format the security officer's token. Close the interface and stop the Enterprise Security Client. Add two parameters in the esc-prefs.js file. The first, esc.disable.password.prompt , sets security officer mode. The second, esc.security.url , points to the security officer enrollment page. Just the presence of the esc.security.url parameter instructs the Enterprise Security Client to open in security officer mode time it opens. Start the Enterprise Security Client again, and open the UI. The Enterprise Security Client is configured to connect to the security officer enrollment form in order to enroll the new security officer's token. Enroll the token as described in Section 4.5.2, "Enrolling a New Security Officer" . Close the interface and stop the Enterprise Security Client. Edit the esc-prefs.js file again, and this time change the esc.security.url parameter to point to the security officer workstation page. Restart the Enterprise Security Client again. The UI now points to the security officer workstation to allow security officers to enroll tokens for regular users. To disable security officer mode, close the Smart Card Manager GUI, stop the escd process, and comment out the esc.security.url and esc.disable.password.prompt lines in the esc-prefs.js file. When the esc process is restarted, it starts in normal mode. 4.5.2. Enrolling a New Security Officer Security officers are set up using a separate, unique interface rather than the one for regular enrollments or the one used for security officer-managed enrollments. Make sure the esc process is running. With security officer mode enabled in the esc-pref.js file ( Section 4.5.1, "Enabling Security Officer Mode" ), the security officer enrollment page opens. In the Security Officer Enrollment window, enter the LDAP user name and password of the new security officer and a password that will be used with the security officer's smart card. Note If the password is stored using the SSHA hash, then any exclamation point (!) 
and dollar sign (USD) characters in the password must be properly escaped for a user to bind successfully to the Enterprise Security Client on Windows XP and Vista systems. For the dollar sign (USD) character, escape the dollar sign when the password is created : Then, enter only the dollar sign (USD) character when logging into the Enterprise Security Client. For the exclamation point (!) character, escape the character when the password is created and when the password is entered to log into the Enterprise Security Client. Click Enroll My Smartcard . This produces a smart card which contains the certificates needed by the security officer to access the Enterprise Security Client security officer, so that regular users can be enrolled and managed within the system. 4.5.3. Using Security Officers to Manage Users The security officer Station page manages regular users through operations such as enrolling new or temporary cards, formatting cards, and setting the Phone Home URL. 4.5.3.1. Enrolling a New User There is one significant difference between enrolling a user's smart card in security officer mode and the process in Section 5.3, "Enrolling a Smart Card Automatically" and Section 5.4.6, "Enrolling Smart Cards" . All processes require logging into an LDAP database to verify the user's identity, but the security officer mode has an extra step to compare some credentials presented by the user against some information in the database (such as a photograph). Make sure the esc process is running. If necessary, start the process. Also, make sure that security officer mode is enabled, as described in Section 4.5.1, "Enabling Security Officer Mode" . Then open the Smart Card Manager UI. Note Ensure that there is a valid and enrolled security officer card plugged into the computer. A security officer's credentials are required to access the following pages. Click Continue to display the security officer Station page. The client prompts for the password for the security officer's card (which is required for SSL client authentication) or to select the security officer's signing certificate from the drop-down menu. Click the Enroll New Card link to display the Security Officer Select User page. Enter the LDAP name of the user who is to receive a new smart card. Click Continue . If the user exists, the Security Officer Confirm User page opens. Compare the information returned in the Smart Card Manager UI to the person or credentials that are present. If all the details are correct, click Continue to display the Security Officer Enroll User page. This page prompts the officer to insert a new smart card into the computer. If the smart card is properly recognized, enter the new password for this card and click Start Enrollment . A successful enrollment produces a smart card that a user can use to access the secured network and services for which the smart card was made. 4.5.3.2. Performing Other Security Officer Tasks All of the other operations that can be performed for regular users by a security officer - issuing temporary tokens, re-enrolling tokens, or setting a Phone Home URL - are performed as described in Chapter 4, Setting up Enterprise Security Client , after opening the security officer UI. Make sure the esc process is running. If necessary, start the process. Also, make sure that security officer mode is enabled, as described in Section 4.5.1, "Enabling Security Officer Mode" . Then open the Smart Card Manager UI. 
Note Ensure that there is a valid and enrolled security officer card plugged into the computer. A security officer's credentials are required to access the following pages. Click Continue to display the security officer Station page. If prompted, enter the password for the security officer's card. This is required for SSL client authentication. Select the operation from the menu (enrolling a temporary token, formatting the card, or setting the Phone Home URL). Continue the operation as described in Chapter 4, Setting up Enterprise Security Client . 4.5.3.3. Formatting an Existing Security Officer Smart Card Important Reformatting a token is a destructive operation to the security officer's token and should only be done if absolutely needed. Make sure that security officer mode is enabled, as described in Section 4.5.1, "Enabling Security Officer Mode" . Open the Smart Card Manager UI. Note Ensure that there is a valid and enrolled security officer card plugged into the computer. A security officer's credentials are required to access the following pages. Click Continue to display the security officer Station page. If prompted, enter the password for the security officer's card. This is required for SSL client authentication. Select the operation from the menu (enrolling a temporary token, formatting the card, or setting the Phone Home URL). Click Format SO Card . Because the security officer card is already inserted, the following screen displays: Click Format to begin the operation. When the card is successfully formatted, the security officer's card values are reset. Another security officer's card must be used to enter security officer mode and perform any further operations.
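Before enabling security officer mode, it can also be useful to confirm that the security officer entry added in Section 4.5.1, "Enabling Security Officer Mode" actually exists under the TUS Officers group. The following is a minimal verification sketch, not part of the original procedure; it reuses the host, port, and Directory Manager credentials from the ldapmodify example in this section, and uid=jsmith is the same hypothetical user shown there. Adjust all of these values to match your TPS deployment.
# Search the TPS internal database for the new security officer entry;
# an empty result means the entry was not added under the TUS Officers group.
/usr/lib/mozldap/ldapsearch -D "cn=Directory Manager" -w secret -p 389 -h server.example.com \
  -b "cn=TUS Officers,ou=Groups,dc=server.example.com-pki-tps" \
  "(uid=jsmith)" dn cn mail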
[ "auth.instance.1.authId=ldap2 auth.instance.1.baseDN=dc=sec officers,dc=server.example.com-pki-tps", "/usr/lib/mozldap/ldapmodify -a -D \"cn=Directory Manager\" -w secret -p 389 -h server.example.com dn: uid=jsmith ,cn=TUS Officers,ou=Groups,dc=server.example.com-pki-tps objectclass: inetorgperson objectclass: organizationalPerson objectclass: person objectclass: top sn: smith uid: jsmith cn: John Smith mail: [email protected] userPassword: secret", "http s ://server.example.com: 9444/ca/ee/ca/", "esc", "/var/lib/pki-tps/cgi-bin/so/index.cgi", "pref(\"esc.disable.password.prompt\",\"no\"); pref(\"esc.security.url\",\"http s ://server.example.com:7888 /cgi-bin/so/enroll.cgi \");", "esc", "pref(\"esc.security.url\",\"http s ://server.example.com:7889 /cgi-bin/sow/welcome.cgi \");", "esc", "\\USD", "\\!", "esc", "esc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/Using_the_Enterprise_Security_Client-Security_Officer_Mode
Chapter 3. Installing a cluster quickly on AWS
Chapter 3. Installing a cluster quickly on AWS In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager .
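The pull secret is a single JSON object. If you save it to a file before running the installation program, a quick parse check catches copy-and-paste damage early. This is an optional sketch, not part of the original procedure, and pull-secret.txt is a hypothetical file name:
# Exits non-zero and prints a parse error if the saved pull secret is not valid JSON.
python3 -m json.tool pull-secret.txt > /dev/null && echo "pull secret parses cleanly"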
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. 
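If the terminal session running create cluster is interrupted after provisioning has started, you do not need to start over; you can reattach and wait for the deployment to finish. A minimal sketch, assuming the same <installation_directory> used above:
# Resume waiting for an in-progress installation to complete.
./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info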
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 3.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin /validating-an-installation.adoc 3.8. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.9. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.10. Next steps Validating an installation. Customize your cluster. If necessary, you can opt out of remote health reporting. If necessary, you can remove cloud provider credentials.
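Before moving on to these next steps, a short command-line check confirms that the nodes and cluster Operators are healthy. This is a minimal sketch, assuming the kubeconfig exported in Section 3.7; it is not a substitute for the full validation procedure:
# All nodes should be Ready, all cluster Operators should report Available=True,
# and the cluster version should show the expected 4.14 release.
oc get nodes
oc get clusteroperators
oc get clusterversion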
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-aws-default
Deploying RHEL 8 on Microsoft Azure
Deploying RHEL 8 on Microsoft Azure Red Hat Enterprise Linux 8 Obtaining RHEL system images and creating RHEL instances on Azure Red Hat Customer Content Services
[ "virt-install --name kvmtest --memory 2048 --vcpus 2 --disk rhel-8.0-x86_64-kvm.qcow2,bus=virtio --import --os-variant=rhel8.0", "lsinitrd | grep hv", "lsinitrd | grep hv drwxr-xr-x 2 root root 0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv -rw-r--r-- 1 root root 31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz -rw-r--r-- 1 root root 25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz -rw-r--r-- 1 root root 9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz", "add_drivers+=\" hv_vmbus \" add_drivers+=\" hv_netvsc \" add_drivers+=\" hv_storvsc \" add_drivers+=\" nvme \"", "dracut -f -v --regenerate-all", "subscription-manager register --auto-attach Installed Product Current Status: Product Name: Red Hat Enterprise Linux for x86_64 Status: Subscribed", "yum install cloud-init hyperv-daemons -y", "reporting: logging: type: log telemetry: type: hyperv", "datasource_list: [ Azure ] datasource: Azure: apply_network_config: False", "blacklist nouveau blacklist lbm-nouveau blacklist floppy blacklist amdgpu blacklist skx_edac blacklist intel_cstate", "rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules rm -f /etc/udev/rules.d/80-net-name-slot-rules", "SUBSYSTEM==\"net\", DRIVERS==\"hv_pci\", ACTION==\"add\", ENV{NM_UNMANAGED}=\"1\"", "systemctl enable sshd systemctl is-enabled sshd", "GRUB_TIMEOUT=10", "rhgb quiet", "GRUB_CMDLINE_LINUX=\"loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300\" GRUB_TIMEOUT_STYLE=countdown GRUB_TERMINAL=\"serial console\" GRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "yum install WALinuxAgent -y systemctl enable waagent", "Provisioning.DeleteRootPassword=y ResourceDisk.Format=n ResourceDisk.EnableSwap=n", "subscription-manager unregister", "waagent -force -deprovision", "export HISTSIZE=0 poweroff", "qemu-img convert -f qcow2 -O raw <image-name> .qcow2 <image-name> .raw", "#!/bin/bash MB=USD((1024 * 1024)) size=USD(qemu-img info -f raw --output json \"USD1\" | gawk 'match(USD0, /\"virtual-size\": ([0-9]+),/, val) {print val[1]}') rounded_size=USD(((USDsize/USDMB + 1) * USDMB)) if [ USD((USDsize % USDMB)) -eq 0 ] then echo \"Your image is already aligned. 
You do not need to resize.\" exit 1 fi echo \"rounded size = USDrounded_size\" export rounded_size", "sh align.sh <image-xxx> .raw", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd", "qemu-img resize -f raw <image-xxx> .raw <rounded-value>", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd", "sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc", "sudo sh -c 'echo -e \"[azure-cli]\\nname=Azure CLI\\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\\nenabled=1\\ngpgcheck=1\\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\" > /etc/yum.repos.d/azure-cli.repo'", "yum check-update", "sudo yum install python3", "sudo yum install -y azure-cli", "az", "az login", "az group create --name <resource-group> --location <azure-region>", "[clouduser@localhost]USD az group create --name azrhelclirsgrp --location southcentralus { \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp\", \"location\": \"southcentralus\", \"managedBy\": null, \"name\": \"azrhelclirsgrp\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": null }", "az storage account create -l <azure-region> -n <storage-account-name> -g <resource-group> --sku <sku_type>", "[clouduser@localhost]USD az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS { \"accessTier\": null, \"creationTime\": \"2017-04-05T19:10:29.855470+00:00\", \"customDomain\": null, \"encryption\": null, \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact\", \"kind\": \"StorageV2\", \"lastGeoFailoverTime\": null, \"location\": \"southcentralus\", \"name\": \"azrhelclistact\", \"primaryEndpoints\": { \"blob\": \"https://azrhelclistact.blob.core.windows.net/\", \"file\": \"https://azrhelclistact.file.core.windows.net/\", \"queue\": \"https://azrhelclistact.queue.core.windows.net/\", \"table\": \"https://azrhelclistact.table.core.windows.net/\" }, \"primaryLocation\": \"southcentralus\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"secondaryEndpoints\": null, \"secondaryLocation\": null, \"sku\": { \"name\": \"Standard_LRS\", \"tier\": \"Standard\" }, \"statusOfPrimary\": \"available\", \"statusOfSecondary\": null, \"tags\": {}, \"type\": \"Microsoft.Storage/storageAccounts\" }", "az storage account show-connection-string -n <storage-account-name> -g <resource-group>", "[clouduser@localhost]USD az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { \"connectionString\": \"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\" }", "export AZURE_STORAGE_CONNECTION_STRING=\"<storage-connection-string>\"", "[clouduser@localhost]USD export AZURE_STORAGE_CONNECTION_STRING=\"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\"", "az storage container create -n <container-name>", "[clouduser@localhost]USD az storage container create -n azrhelclistcont { \"created\": true }", "az network vnet create -g <resource group> --name <vnet-name> --subnet-name <subnet-name>", "[clouduser@localhost]USD az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { \"newVNet\": { \"addressSpace\": { \"addressPrefixes\": [ \"10.0.0.0/16\" ] }, \"dhcpOptions\": { \"dnsServers\": [] }, \"etag\": \"W/\\\"\\\"\", \"id\": 
\"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1\", \"location\": \"southcentralus\", \"name\": \"azrhelclivnet1\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceGuid\": \"0f25efee-e2a6-4abe-a4e9-817061ee1e79\", \"subnets\": [ { \"addressPrefix\": \"10.0.0.0/24\", \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1\", \"ipConfigurations\": null, \"name\": \"azrhelclisubnet1\", \"networkSecurityGroup\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceNavigationLinks\": null, \"routeTable\": null } ], \"tags\": {}, \"type\": \"Microsoft.Network/virtualNetworks\", \"virtualNetworkPeerings\": null } }", "az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd", "[clouduser@localhost]USD az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-{ProductNumber}.vhd --name rhel-image-{ProductNumber}.vhd Percent complete: %100.0", "az storage blob url -c <container-name> -n <image-name>.vhd", "az storage blob url -c azrhelclistcont -n rhel-image-8.vhd \"https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd\"", "az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux", "az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux", "az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --generate-ssh-keys --image <path-to-image>", "[clouduser@localhost]USD az vm create -g azrhelclirsgrp2 -l southcentralus -n rhel-azure-vm-1 --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1 --size Standard_A2 --os-disk-name vm-1-osdisk --admin-username clouduser --generate-ssh-keys --image rhel8 { \"fqdns\": \"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/virtualMachines/rhel-azure-vm-1\", \"location\": \"southcentralus\", \"macAddress\": \"\", \"powerState\": \"VM running\", \"privateIpAddress\": \"10.0.0.4\", \"publicIpAddress\": \" <public-IP-address> \", \"resourceGroup\": \"azrhelclirsgrp2\"", "[clouduser@localhost]USD ssh -i /home/clouduser/.ssh/id_rsa clouduser@ <public-IP-address> . The authenticity of host ', <public-IP-address> ' can't be established. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added ' <public-IP-address> ' (ECDSA) to the list of known hosts. 
[clouduser@rhel-azure-vm-1 ~]USD", "az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --authentication-type password --admin-username <administrator-name> --admin-password <ssh-password> --image <path-to-image>", "ssh <admin-username> @ <public-ip-address>", "az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --ssh-key-value <path-to-existing-ssh-key> --image <path-to-image>", "ssh -i <path-to-existing-ssh-key> <admin-username> @ <public-ip-address>", "subscription-manager register --auto-attach", "insights-client register --display-name <display-name-value>", "subscription-manager config --rhsmcertd.auto_registration=1", "systemctl enable rhsmcertd.service", "subscription-manager config --rhsm.manage_repos=0", "subscription-manager identity system identity: fdc46662-c536-43fb-a18a-bbcb283102b7 name: 192.168.122.222 org name: 6340056 org ID: 6340056", "dnf install kexec-tools", "grep -v \"#\" /etc/kdump.conf path /var/crash core_collector makedumpfile -l --message-level 7 -d 31", "sed s/\"path /var/crash\"/\"path /mnt/crash\"", "vi /etc/default/grub GRUB_CMDLINE_LINUX=\"console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300 crashkernel=512M\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "systemctl status kdump ● kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor prese> Active: active (exited) since Fri 2024-02-09 10:50:18 CET; 1h 20min ago Process: 1252 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCES> Main PID: 1252 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 16975) Memory: 512B CGroup: /system.slice/kdump.service", "az login", "[clouduser@localhost]USD az login To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code FDMSCMETZ to authenticate. 
[ { \"cloudName\": \"AzureCloud\", \"id\": \" Subscription ID \", \"isDefault\": true, \"name\": \" MySubscriptionName \", \"state\": \"Enabled\", \"tenantId\": \" Tenant ID \", \"user\": { \"name\": \" [email protected] \", \"type\": \"user\" } } ]", "az group create --name resource-group --location azure-region", "[clouduser@localhost]USD az group create --name azrhelclirsgrp --location southcentralus { \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp\", \"location\": \"southcentralus\", \"managedBy\": null, \"name\": \"azrhelclirsgrp\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": null }", "az storage account create -l azure-region -n storage-account-name -g resource-group --sku sku_type --kind StorageV2", "[clouduser@localhost]USD az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS --kind StorageV2 { \"accessTier\": null, \"creationTime\": \"2017-04-05T19:10:29.855470+00:00\", \"customDomain\": null, \"encryption\": null, \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact\", \"kind\": \"StorageV2\", \"lastGeoFailoverTime\": null, \"location\": \"southcentralus\", \"name\": \"azrhelclistact\", \"primaryEndpoints\": { \"blob\": \"https://azrhelclistact.blob.core.windows.net/\", \"file\": \"https://azrhelclistact.file.core.windows.net/\", \"queue\": \"https://azrhelclistact.queue.core.windows.net/\", \"table\": \"https://azrhelclistact.table.core.windows.net/\" }, \"primaryLocation\": \"southcentralus\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"secondaryEndpoints\": null, \"secondaryLocation\": null, \"sku\": { \"name\": \"Standard_LRS\", \"tier\": \"Standard\" }, \"statusOfPrimary\": \"available\", \"statusOfSecondary\": null, \"tags\": {}, \"type\": \"Microsoft.Storage/storageAccounts\" }", "az storage account show-connection-string -n storage-account-name -g resource-group", "[clouduser@localhost]USD az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { \"connectionString\": \"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\" }", "export AZURE_STORAGE_CONNECTION_STRING=\" storage-connection-string \"", "[clouduser@localhost]USD export AZURE_STORAGE_CONNECTION_STRING=\"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\"", "az storage container create -n container-name", "[clouduser@localhost]USD az storage container create -n azrhelclistcont { \"created\": true }", "az network vnet create -g resource group --name vnet-name --subnet-name subnet-name", "[clouduser@localhost]USD az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { \"newVNet\": { \"addressSpace\": { \"addressPrefixes\": [ \"10.0.0.0/16\" ] }, \"dhcpOptions\": { \"dnsServers\": [] }, \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1\", \"location\": \"southcentralus\", \"name\": \"azrhelclivnet1\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceGuid\": \"0f25efee-e2a6-4abe-a4e9-817061ee1e79\", \"subnets\": [ { \"addressPrefix\": \"10.0.0.0/24\", \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1\", 
\"ipConfigurations\": null, \"name\": \"azrhelclisubnet1\", \"networkSecurityGroup\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceNavigationLinks\": null, \"routeTable\": null } ], \"tags\": {}, \"type\": \"Microsoft.Network/virtualNetworks\", \"virtualNetworkPeerings\": null } }", "az vm availability-set create --name MyAvailabilitySet --resource-group MyResourceGroup", "[clouduser@localhost]USD az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp { \"additionalProperties\": {}, \"id\": \"/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1\", \"location\": \"southcentralus\", \"name\": \"rhelha-avset1\", \"platformFaultDomainCount\": 2, \"platformUpdateDomainCount\": 5, [omitted]", "lsinitrd | grep hv", "lsinitrd | grep hv drwxr-xr-x 2 root root 0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv -rw-r--r-- 1 root root 31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz -rw-r--r-- 1 root root 25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz -rw-r--r-- 1 root root 9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz", "add_drivers+=\" hv_vmbus \" add_drivers+=\" hv_netvsc \" add_drivers+=\" hv_storvsc \" add_drivers+=\" nvme \"", "dracut -f -v --regenerate-all", "subscription-manager register --auto-attach Installed Product Current Status: Product Name: Red Hat Enterprise Linux for x86_64 Status: Subscribed", "yum install cloud-init hyperv-daemons -y", "reporting: logging: type: log telemetry: type: hyperv", "datasource_list: [ Azure ] datasource: Azure: apply_network_config: False", "blacklist nouveau blacklist lbm-nouveau blacklist floppy blacklist amdgpu blacklist skx_edac blacklist intel_cstate", "rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules rm -f /etc/udev/rules.d/80-net-name-slot-rules", "SUBSYSTEM==\"net\", DRIVERS==\"hv_pci\", ACTION==\"add\", ENV{NM_UNMANAGED}=\"1\"", "systemctl enable sshd systemctl is-enabled sshd", "GRUB_TIMEOUT=10", "rhgb quiet", "GRUB_CMDLINE_LINUX=\"loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300\" GRUB_TIMEOUT_STYLE=countdown GRUB_TERMINAL=\"serial console\" GRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "yum install WALinuxAgent -y systemctl enable waagent", "Provisioning.DeleteRootPassword=y ResourceDisk.Format=n ResourceDisk.EnableSwap=n", "subscription-manager unregister", "waagent -force -deprovision", "export HISTSIZE=0 poweroff", "az login", "{ \"Name\": \"Linux Fence Agent Role\", \"description\": \"Allows to power-off and start virtual machines\", \"assignableScopes\": [ \"/subscriptions/ <subscription-id> \" ], \"actions\": [ \"Microsoft.Compute/*/read\", \"Microsoft.Compute/virtualMachines/powerOff/action\", \"Microsoft.Compute/virtualMachines/start/action\" ], \"notActions\": [], \"dataActions\": [], \"notDataActions\": [] }", "az role definition create --role-definition azure-fence-role.json { \"assignableScopes\": [ \"/subscriptions/ <my-subscription-id> \" ], \"description\": \"Allows to power-off and start virtual machines\", \"id\": \"/subscriptions/ <my-subscription-id> /providers/Microsoft.Authorization/roleDefinitions/ <role-id> \", 
\"name\": \" <role-id> \", \"permissions\": [ { \"actions\": [ \"Microsoft.Compute/*/read\", \"Microsoft.Compute/virtualMachines/powerOff/action\", \"Microsoft.Compute/virtualMachines/start/action\" ], \"dataActions\": [], \"notActions\": [], \"notDataActions\": [] } ], \"roleName\": \"Linux Fence Agent Role\", \"roleType\": \"CustomRole\", \"type\": \"Microsoft.Authorization/roleDefinitions\" }", "fence_azure_arm --msi -o list node1, node2, [...]", "qemu-img convert -f qcow2 -O raw <image-name> .qcow2 <image-name> .raw", "#!/bin/bash MB=USD((1024 * 1024)) size=USD(qemu-img info -f raw --output json \"USD1\" | gawk 'match(USD0, /\"virtual-size\": ([0-9]+),/, val) {print val[1]}') rounded_size=USD(((USDsize/USDMB + 1) * USDMB)) if [ USD((USDsize % USDMB)) -eq 0 ] then echo \"Your image is already aligned. You do not need to resize.\" exit 1 fi echo \"rounded size = USDrounded_size\" export rounded_size", "sh align.sh <image-xxx> .raw", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd", "qemu-img resize -f raw <image-xxx> .raw <rounded-value>", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd", "az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd", "[clouduser@localhost]USD az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-{ProductNumber}.vhd --name rhel-image-{ProductNumber}.vhd Percent complete: %100.0", "az storage blob url -c <container-name> -n <image-name>.vhd", "az storage blob url -c azrhelclistcont -n rhel-image-8.vhd \"https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd\"", "az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux", "az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux", "ssh administrator@PublicIP", "az vm list -g <resource-group> -d --output table", "[clouduser@localhost ~] USD az vm list -g azrhelclirsgrp -d --output table Name ResourceGroup PowerState PublicIps Location ------ ---------------------- -------------- ------------- -------------- node01 azrhelclirsgrp VM running 192.98.152.251 southcentralus", "sudo -i subscription-manager register --auto-attach", "subscription-manager repos --disable= *", "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum update -y", "yum install pcs pacemaker fence-agents-azure-arm", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl start pcsd.service systemctl enable pcsd.service Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.", "systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-02-23 11:00:58 EST; 1min 23s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 46235 (pcsd) CGroup: /system.slice/pcsd.service └─46235 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &", "pcs host auth <hostname1> <hostname2> <hostname3>", "pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized", "pcs cluster setup <cluster_name> 
<hostname1> <hostname2> <hostname3>", "pcs cluster setup new_cluster node01 node02 node03 [...] Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success", "pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled", "pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster", "fence_azure_arm -l <AD-Application-ID> -p <AD-Password> --resourceGroup <MyResourceGroup> --tenantId <Tenant-ID> --subscriptionId <Subscription-ID> -o list", "fence_azure_arm -l e04a6a49-9f00-xxxx-xxxx-a8bdda4af447 -p z/a05AwCN0IzAjVwXXXXXXXEWIoeVp0xg7QT//JE= --resourceGroup azrhelclirsgrp --tenantId 77ecefb6-cff0-XXXX-XXXX-757XXXX9485 --subscriptionId XXXXXXXX-38b4-4527-XXXX-012d49dfc02c -o list node01, node02, node03,", "pcs stonith describe fence_azure_arm", "pcs stonith describe fence_apc Stonith options: password: Authentication key password_script: Script to run to retrieve password", "pcs stonith create clusterfence fence_azure_arm", "pcs stonith fence azurenodename", "pcs status Cluster name: newcluster Stack: corosync Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Feb 23 11:44:35 2018 Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01 3 nodes configured 1 resource configured Online: [ node01 node03 ] OFFLINE: [ node02 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "pcs cluster start <hostname>", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Feb 23 11:34:59 2018 Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01 3 nodes configured 1 resource configured Online: [ node01 node02 node03 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "yum install nmap-ncat resource-agents", "pcs resource create resource-name IPaddr2 ip=\"10.0.0.7\" --group cluster-resources-group", "pcs resource create resource-loadbalancer-name azure-lb port= port-number --group cluster-resources-group", "pcs status", "Cluster name: clusterfence01 Stack: corosync Current DC: node02 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum Last updated: Tue Jan 30 12:42:35 2018 Last change: Tue Jan 30 12:26:42 2018 by root via cibadmin on node01 3 nodes configured 3 resources configured Online: [ node01 node02 node03 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Resource Group: g_azure vip_azure (ocf::heartbeat:IPaddr2): Started node02 lb_azure (ocf::heartbeat:azure-lb): Started node02 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "az disk create -g <resource_group> -n <shared_block_volume_name> --size-gb <disk_size> --max-shares <number_vms> -l <location>", "az disk create -g sharedblock-rg -n shared-block-volume.vhd --size-gb 1024 --max-shares 3 -l westcentralus { \"creationData\": { \"createOption\": \"Empty\", \"galleryImageReference\": null, \"imageReference\": null, \"sourceResourceId\": null, \"sourceUniqueId\": null, \"sourceUri\": null, \"storageAccountId\": null, 
\"uploadSizeBytes\": null }, \"diskAccessId\": null, \"diskIopsReadOnly\": null, \"diskIopsReadWrite\": 5000, \"diskMbpsReadOnly\": null, \"diskMbpsReadWrite\": 200, \"diskSizeBytes\": 1099511627776, \"diskSizeGb\": 1024, \"diskState\": \"Unattached\", \"encryption\": { \"diskEncryptionSetId\": null, \"type\": \"EncryptionAtRestWithPlatformKey\" }, \"encryptionSettingsCollection\": null, \"hyperVgeneration\": \"V1\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd\", \"location\": \"westcentralus\", \"managedBy\": null, \"managedByExtended\": null, \"maxShares\": 3, \"name\": \"shared-block-volume.vhd\", \"networkAccessPolicy\": \"AllowAll\", \"osType\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"sharedblock-rg\", \"shareInfo\": null, \"sku\": { \"name\": \"Premium_LRS\", \"tier\": \"Premium\" }, \"tags\": {}, \"timeCreated\": \"2020-08-27T15:36:56.263382+00:00\", \"type\": \"Microsoft.Compute/disks\", \"uniqueId\": \"cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2\", \"zones\": null }", "az disk show -g <resource_group> -n <shared_block_volume_name>", "az disk show -g sharedblock-rg -n shared-block-volume.vhd { \"creationData\": { \"createOption\": \"Empty\", \"galleryImageReference\": null, \"imageReference\": null, \"sourceResourceId\": null, \"sourceUniqueId\": null, \"sourceUri\": null, \"storageAccountId\": null, \"uploadSizeBytes\": null }, \"diskAccessId\": null, \"diskIopsReadOnly\": null, \"diskIopsReadWrite\": 5000, \"diskMbpsReadOnly\": null, \"diskMbpsReadWrite\": 200, \"diskSizeBytes\": 1099511627776, \"diskSizeGb\": 1024, \"diskState\": \"Unattached\", \"encryption\": { \"diskEncryptionSetId\": null, \"type\": \"EncryptionAtRestWithPlatformKey\" }, \"encryptionSettingsCollection\": null, \"hyperVgeneration\": \"V1\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd\", \"location\": \"westcentralus\", \"managedBy\": null, \"managedByExtended\": null, \"maxShares\": 3, \"name\": \"shared-block-volume.vhd\", \"networkAccessPolicy\": \"AllowAll\", \"osType\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"sharedblock-rg\", \"shareInfo\": null, \"sku\": { \"name\": \"Premium_LRS\", \"tier\": \"Premium\" }, \"tags\": {}, \"timeCreated\": \"2020-08-27T15:36:56.263382+00:00\", \"type\": \"Microsoft.Compute/disks\", \"uniqueId\": \"cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2\", \"zones\": null }", "az network nic create -g <resource_group> -n <nic_name> --subnet <subnet_name> --vnet-name <virtual_network> --location <location> --network-security-group <network_security_group> --private-ip-address-version IPv4", "az network nic create -g sharedblock-rg -n sharedblock-nodea-vm-nic-protected --subnet sharedblock-subnet-protected --vnet-name sharedblock-vn --location westcentralus --network-security-group sharedblock-nsg --private-ip-address-version IPv4", "az vm create -n <vm_name> -g <resource_group> --attach-data-disks <shared_block_volume_name> --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name <new-vm-disk-name> --os-disk-size-gb <disk_size> --location <location> --size <virtual_machine_size> --image <image_name> --admin-username <vm_username> --authentication-type ssh --ssh-key-values <ssh_key> --nics <nic_name> --availability-set <availability_set> --ppg <proximity_placement_group>", "az vm create -n sharedblock-nodea-vm -g sharedblock-rg --attach-data-disks 
shared-block-volume.vhd --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name sharedblock-nodea-vm.vhd --os-disk-size-gb 64 --location westcentralus --size Standard_D2s_v3 --image /subscriptions/12345678910-12345678910/resourceGroups/sample-azureimagesgroupwestcentralus/providers/Microsoft.Compute/images/sample-azure-rhel-8.3.0-20200713.n.0.x86_64 --admin-username sharedblock-user --authentication-type ssh --ssh-key-values @sharedblock-key.pub --nics sharedblock-nodea-vm-nic-protected --availability-set sharedblock-as --ppg sharedblock-ppg { \"fqdns\": \"\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/virtualMachines/sharedblock-nodea-vm\", \"location\": \"westcentralus\", \"macAddress\": \"00-22-48-5D-EE-FB\", \"powerState\": \"VM running\", \"privateIpAddress\": \"198.51.100.3\", \"publicIpAddress\": \"\", \"resourceGroup\": \"sharedblock-rg\", \"zones\": \"\" }", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T '\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea sdb 8:16 0 1T 0 disk", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=3600224808dd8eb102f6ffc5822c41d89" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/deploying_rhel_8_on_microsoft_azure/console.redhat.com
Chapter 47. Passing Information into Resource Classes and Methods
Chapter 47. Passing Information into Resource Classes and Methods Abstract JAX-RS specifies a number of annotations that allow the developer to control where the information passed into resources comes from. The annotations conform to common HTTP concepts such as matrix parameters in a URI. The standard APIs allow the annotations to be used on method parameters, bean properties, and resource class fields. Apache CXF provides an extension that allows a sequence of parameters to be injected into a bean. 47.1. Basics of injecting data Overview Parameters, fields, and bean properties that are initialized using data from the HTTP request message have their values injected into them by the runtime. The specific data that is injected is specified by a set of annotations described in Section 47.2, "Using JAX-RS APIs" . The JAX-RS specification places a few restrictions on when the data is injected. It also places a few restrictions on the types of objects into which request data can be injected. When data is injected Request data is injected into objects when they are instantiated due to a request. This means that only objects that directly correspond to a resource can use the injection annotations. As discussed in Chapter 46, Creating Resources , these objects will either be a root resource decorated with the @Path annotation or an object returned from a sub-resource locator method. Supported data types The specific set of data types that data can be injected into depends on the annotation used to specify the source of the injected data. However, all of the injection annotations support at least the following set of data types: primitives such as int , char , or long Objects that have a constructor that accepts a single String argument Objects that have a static valueOf() method that accepts a single String argument List< T >, Set< T >, or SortedSet< T > objects where T satisfies the other conditions in the list Note Where injection annotations have different requirements for supported data types, the differences will be highlighted in the discussion of the annotation. 47.2. Using JAX-RS APIs 47.2.1. JAX-RS Annotation Types The standard JAX-RS API specifies annotations that can be used to inject values into fields, bean properties, and method parameters. The annotations can be split up into three distinct types: Section 47.2.2, "Injecting data from a request URI" Section 47.2.3, "Injecting data from the HTTP message header" Section 47.2.4, "Injecting data from HTML forms" 47.2.2. Injecting data from a request URI Overview One of the best practices for designing a RESTful Web service is that each resource should have a unique URI. A developer can use this principle to provide a good deal of information to the underlying resource implementation. When designing URI templates for a resource, a developer can build the templates to include parameter information that can be injected into the resource implementation. Developers can also leverage query and matrix parameters for feeding information into the resource implementations. Getting data from the URI's path One of the more common mechanisms for getting information about a resource is through the variables used in creating the URI templates for a resource. This is accomplished using the javax.ws.rs.PathParam annotation. The @PathParam annotation has a single parameter that identifies the URI template variable from which the data will be injected.
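As a quick client-side illustration, the following sketch assumes the Box resource from Example 47.1 below is published at a hypothetical http://localhost:8080 base address; the host, port, and segment values are illustrative only:

```bash
# GET /boxes/{shape}/{color} -- the final segment matches the {color}
# template variable, so the value "red" would be injected into itemColor.
curl http://localhost:8080/boxes/cube/red
```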
In Example 47.1, "Injecting data from a URI template variable" the @PathParam annotation specifies that the value of the URI template variable color is injected into the itemColor field. Example 47.1. Injecting data from a URI template variable The data types supported by the @PathParam annotation are different from the ones described in the section called "Supported data types" . The entity into which the @PathParam annotation injects data must be of one of the following types: PathSegment The value will be the final segment of the matching part of the path. List<PathSegment> The value will be a list of PathSegment objects corresponding to the path segment(s) that matched the named template parameter. primitives such as int , char , or long Objects that have a constructor that accepts a single String argument Objects that have a static valueOf() method that accepts a single String argument Using query parameters A common way of passing information on the Web is to use query parameters in a URI. Query parameters appear at the end of the URI and are separated from the resource location portion of the URI by a question mark( ? ). They consist of one or more name/value pairs where the name and value are separated by an equal sign( = ). When more than one query parameter is specified, the pairs are separated from each other by either a semicolon( ; ) or an ampersand( & ). Example 47.2, "URI with a query string" shows the syntax of a URI with query parameters. Example 47.2. URI with a query string Note You can use either the semicolon or the ampersand to separate query parameters, but not both. The javax.ws.rs.QueryParam annotation extracts the value of a query parameter and injects it into a JAX-RS resource. The annotation takes a single parameter that identifies the name of the query parameter from which the value is extracted and injected into the specified field, bean property, or parameter. The @QueryParam annotation supports the types described in the section called "Supported data types" . Example 47.3, "Resource method using data from a query parameter" shows a resource method that injects the value of the query parameter id into the method's id parameter. Example 47.3. Resource method using data from a query parameter To process an HTTP POST to /monstersforhire/daikaiju?id=jonas , the updateMonster() method's type is set to daikaiju and the id is set to jonas . Using matrix parameters URI matrix parameters, like URI query parameters, are name/value pairs that can provide additional information for selecting a resource. Unlike query parameters, matrix parameters can appear anywhere in a URI and they are separated from the hierarchical path segments of the URI using a semicolon( ; ). /monstersforhire/daikaiju;id=jonas has one matrix parameter called id and /monstersforhire/japan;type=daikaiju/flying;wingspan=40 has two matrix parameters called type and wingspan . Note Matrix parameters are not evaluated when computing a resource's URI. So, the URI used to locate the proper resource to handle the request URI /monstersforhire/japan;type=daikaiju/flying;wingspan=40 is /monstersforhire/japan/flying . The value of a matrix parameter is injected into a field, parameter, or bean property using the javax.ws.rs.MatrixParam annotation. The annotation takes a single parameter that identifies the name of the matrix parameter from which the value is extracted and injected into the specified field, bean property, or parameter.
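To make the difference between the two URI styles concrete, the following sketch shows how a client might invoke the MonsterService methods from Example 47.3 and Example 47.4; the http://localhost:8080 base address is a hypothetical deployment detail, not part of the examples themselves:

```bash
# Query parameters follow a '?': {type} comes from the path template,
# id comes from the query string (Example 47.3).
curl -X POST 'http://localhost:8080/monstersforhire/daikaiju?id=jonas'

# Matrix parameters are attached to a path segment with ';' (Example 47.4).
curl -X POST 'http://localhost:8080/monstersforhire;type=daikaiju;id=whale'
```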
The @MatrixParam annotation supports the types described in the section called "Supported data types" . Example 47.4, "Resource method using data from matrix parameters" shows a resource method that injects the value of the matrix parameters type and id into the method's parameters. Example 47.4. Resource method using data from matrix parameters To process an HTTP POST to /monstersforhire;type=daikaiju;id=whale , the updateMonster() method's type is set to daikaiju and the id is set to whale . Note JAX-RS evaluates all of the matrix parameters in a URI at once, so it cannot enforce constraints on a matrix parameter's location in a URI. For example, /monstersforhire/japan;type=daikaiju/flying;wingspan=40 , /monstersforhire/japan/flying;type=daikaiju;wingspan=40 , and /monstersforhire/japan;type=daikaiju;wingspan=40/flying are all treated as equivalent by a RESTful Web service implemented using the JAX-RS APIs. Disabling URI decoding By default, all request URIs are decoded. So the URI /monster/night%20stalker and the URI /monster/night stalker are equivalent. The automatic URI decoding makes it easy to send characters outside of the ASCII character set as parameters. If you do not wish to have URIs automatically decoded, you can use the javax.ws.rs.Encoded annotation to deactivate the URI decoding. The annotation can be used to deactivate URI decoding at the following levels: class level-Decorating a class with the @Encoded annotation deactivates the URI decoding for all parameters, fields, and bean properties in the class. method level-Decorating a method with the @Encoded annotation deactivates the URI decoding for all parameters of the method. parameter/field level-Decorating a parameter or field with the @Encoded annotation deactivates the URI decoding for that parameter or field. Example 47.5, "Disabling URI decoding" shows a resource whose getMonster() method does not use URI decoding. The addMonster() method only disables URI decoding for the type parameter. Example 47.5. Disabling URI decoding Error handling If an error occurs when attempting to inject data using one of the URI injection annotations, a WebApplicationException exception wrapping the original exception is generated. The WebApplicationException exception's status is set to 404 . 47.2.3. Injecting data from the HTTP message header Overview In normal usage, the HTTP headers in a request message pass along generic information about the message, how it is to be handled in transit, and details about the expected response. While a few standard headers are commonly recognized and used, the HTTP specification allows for any name/value pair to be used as an HTTP header. The JAX-RS APIs provide an easy mechanism for injecting HTTP header information into a resource implementation. One of the most commonly used HTTP headers is the cookie. Cookies allow HTTP clients and servers to share static information across multiple request/response sequences. The JAX-RS APIs provide an annotation to inject data directly from a cookie into a resource implementation. Injecting information from the HTTP headers The javax.ws.rs.HeaderParam annotation is used to inject the data from an HTTP header field into a parameter, field, or bean property. It has a single parameter that specifies the name of the HTTP header field from which the value is extracted and injected into the resource implementation. The associated parameter, field, or bean property must conform to the data types described in the section called "Supported data types" .
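A quick client-side sketch of supplying a header that a @HeaderParam annotation can pick up; the endpoint URL is hypothetical and the header value is only a sample date:

```bash
# The -H flag adds an HTTP header to the request; a @HeaderParam("If-Modified-Since")
# annotation on the server side would receive the value shown here.
curl -H 'If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT' \
     http://localhost:8080/records
```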
Injecting the If-Modified-Since header shows code for injecting the value of the HTTP If-Modified-Since header into a class' oldestDate field. Injecting the If-Modified-Since header Injecting information from a cookie Cookies are a special type of HTTP header. They are made up of one or more name/value pairs that are passed to the resource implementation on the first request. After the first request, the cookie is passed back and forth between the provider and consumer with each message. Only the consumer, because they generate requests, can change the cookie. Cookies are commonly used to maintain a session across multiple request/response sequences, to store user settings, and to hold other data that can persist. The javax.ws.rs.CookieParam annotation extracts the value from a cookie's field and injects it into a resource implementation. It takes a single parameter that specifies the name of the cookie's field from which the value is to be extracted. In addition to the data types listed in the section called "Supported data types" , entities decorated with the @CookieParam annotation can also be a Cookie object. Example 47.6, "Injecting a cookie" shows code for injecting the value of the handle cookie into a field in the CB class. Example 47.6. Injecting a cookie Error handling If an error occurs when attempting to inject data using one of the HTTP message injection annotations, a WebApplicationException exception wrapping the original exception is generated. The WebApplicationException exception's status is set to 400 . 47.2.4. Injecting data from HTML forms Overview HTML forms are an easy means of getting information from a user and they are also easy to create. Form data can be used for HTTP GET requests and HTTP POST requests: GET When form data is sent as part of an HTTP GET request, the data is appended to the URI as a set of query parameters. Injecting data from query parameters is discussed in the section called "Using query parameters" . POST When form data is sent as part of an HTTP POST request, the data is placed in the HTTP message body. The form data can be handled using a regular entity parameter that supports the form data. It can also be handled by using the @FormParam annotation to extract the data and inject the pieces into resource method parameters. Using the @FormParam annotation to inject form data The javax.ws.rs.FormParam annotation extracts field values from form data and injects the value into resource method parameters. The annotation takes a single parameter that specifies the key of the field from which it extracts the values. The associated parameter must conform to the data types described in the section called "Supported data types" . Important The JAX-RS API Javadoc states that the @FormParam annotation can be placed on fields, methods, and parameters. However, the @FormParam annotation is only meaningful when placed on resource method parameters. Example Injecting form data into resource method parameters shows a resource method that injects form data into its parameters. The method assumes that the client's form includes three fields- title , tags , and body -that contain string data. Injecting form data into resource method parameters 47.2.5. Specifying a default value to inject Overview To provide for a more robust service implementation, you may want to ensure that any optional parameters can be set to a default value. This can be particularly useful for values that are taken from query parameters and matrix parameters since entering long URI strings is highly error prone.
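A client-side sketch of how the cookie and form data described above arrive on the wire; it assumes the CB resource from Example 47.6 and the blog-post method from Injecting form data into resource method parameters are deployed at a hypothetical http://localhost:8080 address, and the /cb and /posts paths are assumptions made for illustration, not paths defined by the examples:

```bash
# Send a cookie; a @CookieParam("handle") annotation extracts the value.
curl -b 'handle=ronnie' http://localhost:8080/cb

# Post HTML form fields; @FormParam("title"), @FormParam("tags") and
# @FormParam("body") pick the values out of the message body.
curl -X POST \
     --data-urlencode 'title=First post' \
     --data-urlencode 'tags=jaxrs,cxf' \
     --data-urlencode 'body=Hello from curl' \
     http://localhost:8080/posts
```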
You may also want to set a default value for a parameter extracted from a cookie since it is possible for a requesting system not to have the proper information to construct a cookie with all the values. The javax.ws.rs.DefaultValue annotation can be used in conjunction with the following injection annotations: @PathParam @QueryParam @MatrixParam @FormParam @HeaderParam @CookieParam The @DefaultValue annotation specifies a default value to be used when the data corresponding to the injection annotation is not present in the request. Syntax Syntax for setting the default value of a parameter shows the syntax for using the @DefaultValue annotation. Syntax for setting the default value of a parameter The annotation must come before the parameter, bean, or field it will affect. The position of the @DefaultValue annotation relative to the accompanying injection annotation does not matter. The @DefaultValue annotation takes a single parameter. This parameter is the value that will be injected into the field if the proper data cannot be extracted based on the injection annotation. The value can be any String value. The value should be compatible with the type of the associated field. For example, if the associated field is of type int , a default value of blue results in an exception. Dealing with lists and sets If the type of the annotated parameter, bean, or field is List, Set, or SortedSet, then the resulting collection will have a single entry mapped from the supplied default value. Example Setting default values shows two examples of using the @DefaultValue annotation to specify a default value for a field whose value is injected. Setting default values The getMonster() method in Setting default values is invoked when a GET request is sent to baseURI /monster . The method expects two query parameters, id and type , appended to the URI. So a GET request using the URI baseURI /monster?id=1&type=fomoiri would return the Fomoiri with the id of one. Because the @DefaultValue annotation is placed on both parameters, the getMonster() method can function if the query parameters are omitted. A GET request sent to baseURI /monster is equivalent to a GET request using the URI baseURI /monster?id=42&type=bogeyman . 47.2.6. Injecting Parameters into a Java Bean Overview When posting HTML forms over REST, a common pattern on the server side is to create a Java bean to encapsulate all of the data received in the form (and possibly data from other parameters and HTTP headers, as well). Normally, creating this Java bean would be a two-step process: a resource method receives the form values by injection (for example, by adding @FormParam annotations to its method parameters), and the resource method then calls the bean's constructor, passing in the form data. Using the JAX-RS 2.0 @BeanParam annotation, it is possible to implement this pattern in a single step. The form data can be injected directly into the fields of the bean class and the bean itself is created automatically by the JAX-RS runtime. This is most easily explained by example. Injection target The @BeanParam annotation can be attached to resource method parameters, resource fields, or bean properties. A parameter target is the only kind of target that can be used with all resource class lifecycles, however. The other kinds of target are restricted to the per-request lifecycle. This situation is summarized in Table 47.1, "@BeanParam Injection Targets" . Table 47.1.
@BeanParam Injection Targets Target Resource Class Lifecycles PARAMETER All FIELD Per-request (default) METHOD (bean property) Per-request (default) Example without BeanParam annotation The following example shows how you might go about capturing form data in a Java bean using the conventional approach (without using @BeanParam ): In this example, the orderTable method processes a form that is used to order a quantity of tables from a furniture Web site. When the order form is posted, the form values are injected into the parameters of the orderTable method, and the orderTable method explicitly creates an instance of the TableOrder class, using the injected form data. Example with BeanParam annotation The example can be refactored to take advantage of the @BeanParam annotation. When using the @BeanParam approach, the form parameters can be injected directly into the fields of the bean class, TableOrder . In fact, you can use any of the standard JAX-RS parameter annotations in the bean class: including @PathParam , @QueryParam , @FormParam , @MatrixParam , @CookieParam , and @HeaderParam . The code for processing the form can be refactored as follows: Now that the form annotations have been added to the bean class, TableOrder, you can replace all of the @FormParam annotations in the signature of the resource method with just a single @BeanParam annotation, as shown. Now, when the form is posted to the orderTable resource method, the JAX-RS runtime automatically creates a TableOrder instance, orderBean , and injects all of the data specified by the parameter annotations on the bean class. 47.3. Parameter Converters Overview Using parameter converters, it is possible to inject a parameter (of String type) into any type of field, bean property, or resource method argument. By implementing and binding a suitable parameter converter, you can extend the JAX-RS runtime so that it is capable of converting the parameter String value to the target type. Automatic conversions Parameters are received as instances of String , so you can always inject them directly into fields, bean properties, and method parameters of String type. In addition, the JAX-RS runtime has the capability to convert parameter strings automatically to the following types: Primitive types. Types that have a constructor that accepts a single String argument. Types that have a static method named valueOf or fromString with a single String argument that returns an instance of the type. List<T> , Set<T> , or SortedSet<T> , if T is one of the types described in 2 or 3. Parameter converters In order to inject a parameter into a type not covered by automatic conversion, you can define a custom parameter converter for the type. A parameter converter is a JAX-RS extension that enables you to define conversion from String to a custom type, and also in the reverse direction, from the custom type to a String . Factory pattern The JAX-RS parameter converter mechanism uses a factory pattern. So, instead of registering a parameter converter directly, you must register a parameter converter provider (of type, javax.ws.rs.ext.ParamConverterProvider ), which creates a parameter converter (of type, javax.ws.rs.ext.ParamConverter ) on demand. 
ParamConverter interface The javax.ws.rs.ext.ParamConverter interface is defined as follows: To implement your own ParamConverter class, you must implement this interface, overriding the fromString method (to convert the parameter string to your target type) and the toString method (to convert your target type back to a string). ParamConverterProvider interface The javax.ws.rs.ext.ParamConverterProvider interface is defined as follows: To implement your own ParamConverterProvider class, you must implement this interface, overriding the getConverter method, which is a factory method that creates ParamConverter instances. Binding the parameter converter provider To bind the parameter converter provider to the JAX-RS runtime (thus making it available to your application), you must annotate your implementation class with the @Provider annotation, as follows: This annotation ensures that your parameter converter provider is automatically registered during the scanning phase of deployment. Example The following example shows how to implement a ParamConverterProvider and a ParamConverter that can convert parameter strings to and from the TargetType type: Using the parameter converter Now that you have defined a parameter converter for TargetType , it is possible to inject parameters directly into TargetType fields and arguments, for example: Lazy conversion of default value If you specify default values for your parameters (using the @DefaultValue annotation), you can choose whether the default value is converted to the target type right away (default behaviour), or whether the default value should be converted only when required (lazy conversion). To select lazy conversion, add the @ParamConverter.Lazy annotation to the target type. For example: 47.4. Using Apache CXF extensions Overview Apache CXF provides an extension to the standard JAX-RS injection mechanism that allows developers to replace a sequence of injection annotations with a single annotation. The single annotation is placed on a bean containing fields for the data that is extracted using the annotation. For example, if a resource method is expecting a request URI to include three query parameters called id , type , and size , it could use a single @QueryParam annotation to inject all of the parameters into a bean with corresponding fields. Note Consider using the @BeanParam annotation instead (available since JAX-RS 2.0). The standardized @BeanParam approach is more flexible than the proprietary Apache CXF extension, and is thus the recommended alternative. For details, see Section 47.2.6, "Injecting Parameters into a Java Bean" . Supported injection annotations This extension does not support all of the injection annotations. It only supports the following ones: @PathParam @QueryParam @MatrixParam @FormParam Syntax To indicate that an annotation is going to use serial injection into a bean, you need to do two things: Specify the annotation's parameter as an empty string. For example, @PathParam("") specifies that a sequence of URI template variables are to be serialized into a bean. Ensure that the annotated parameter is a bean with fields that match the values being injected. Example Example 47.7, "Injecting query parameters into a bean" shows an example of injecting a number of query parameters into a bean. The resource method expects the request URI to include two query parameters: type and id . Their values are injected into the corresponding fields of the Monster bean. Example 47.7.
Injecting query parameters into a bean
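Assuming the MonsterService resource from Example 47.7 (shown in the listing below) is deployed at a hypothetical http://localhost:8080 base address, a single request can populate both fields of the Monster bean:

```bash
# Both query parameters are serialized into the Monster bean:
# type -> bean.type, id -> bean.id.
curl -X POST 'http://localhost:8080/monstersforhire/?type=daikaiju&id=jonas'
```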
[ "import javax.ws.rs.Path; import javax.ws.rs.PathParam @Path(\"/boxes/{shape}/{color}\") class Box { @PathParam(\"color\") String itemColor; }", "http://fusesource.org ? name = value ; name2 = value2 ;", "import javax.ws.rs.QueryParam; import javax.ws.rs.PathParam; import javax.ws.rs.POST; import javax.ws.rs.Path; @Path(\"/monstersforhire/\") public class MonsterService { @POST @Path(\"/{type}\") public void updateMonster(@PathParam(\"type\") String type, @QueryParam(\"id\") String id) { } }", "import javax.ws.rs.MatrixParam; import javax.ws.rs.POST; import javax.ws.rs.Path; @Path(\"/monstersforhire/\") public class MonsterService { @POST public void updateMonster(@MatrixParam(\"type\") String type, @MatrixParam(\"id\") String id) { } }", "@Path(\"/monstersforhire/\") public class MonsterService { @GET @Encoded @Path(\"/{type}\") public Monster getMonster(@PathParam(\"type\") String type, @QueryParam(\"id\") String id) { } @PUT @Path(\"/{id}\") public void addMonster(@Encoded @PathParam(\"type\") String type, @QueryParam(\"id\") String id) { } }", "import javax.ws.rs.HeaderParam; class RecordKeeper { @HeaderParam(\"If-Modified-Since\") String oldestDate; }", "import javax.ws.rs.CookieParam; class CB { @CookieParam(\"handle\") String handle; }", "import javax.ws.rs.FormParam; import javax.ws.rs.POST; @POST public boolean updatePost(@FormParam(\"title\") String title, @FormParam(\"tags\") String tags, @FormParam(\"body\") String post) { }", "import javax.ws.rs.DefaultValue; void resourceMethod(@MatrixParam(\"matrix\") @DefaultValue(\" value ) int someValue, ... )", "import javax.ws.rs.DefaultValue; import javax.ws.rs.PathParam; import javax.ws.rs.QueryParam; import javax.ws.rs.GET; import javax.ws.rs.Path; @Path(\"/monster\") public class MonsterService { @Get public Monster getMonster(@QueryParam(\"id\") @DefaultValue(\"42\") int id, @QueryParam(\"type\") @DefaultValue(\"bogeyman\") String type) { } }", "// Java import javax.ws.rs.POST; import javax.ws.rs.FormParam; import javax.ws.rs.core.Response; @POST public Response orderTable(@FormParam(\"orderId\") String orderId, @FormParam(\"color\") String color, @FormParam(\"quantity\") String quantity, @FormParam(\"price\") String price) { TableOrder bean = new TableOrder(orderId, color, quantity, price); return Response.ok().build(); }", "// Java import javax.ws.rs.POST; import javax.ws.rs.FormParam; import javax.ws.rs.core.Response; public class TableOrder { @FormParam(\"orderId\") private String orderId; @FormParam(\"color\") private String color; @FormParam(\"quantity\") private String quantity; @FormParam(\"price\") private String price; // Define public getter/setter methods // (Not shown) } @POST public Response orderTable(@BeanParam TableOrder orderBean) { // Do whatever you like with the 'orderBean' bean return Response.ok().build(); }", "// Java package javax.ws.rs.ext; import java.lang.annotation.Documented; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; import javax.ws.rs.DefaultValue; public interface ParamConverter<T> { @Target({ElementType.TYPE}) @Retention(RetentionPolicy.RUNTIME) @Documented public static @interface Lazy {} public T fromString(String value); public String toString(T value); }", "// Java package javax.ws.rs.ext; import java.lang.annotation.Annotation; import java.lang.reflect.Type; public interface ParamConverterProvider { public <T> ParamConverter<T> getConverter(Class<T> rawType, Type genericType, 
Annotation annotations[]); }", "// Java import javax.ws.rs.ext.ParamConverterProvider; import javax.ws.rs.ext.Provider; @Provider public class TargetTypeProvider implements ParamConverterProvider { }", "// Java import java.lang.annotation.Annotation; import java.lang.reflect.Type; import javax.ws.rs.ext.ParamConverter; import javax.ws.rs.ext.ParamConverterProvider; import javax.ws.rs.ext.Provider; @Provider public class TargetTypeProvider implements ParamConverterProvider { @Override public <T> ParamConverter<T> getConverter( Class<T> rawType, Type genericType, Annotation[] annotations ) { if (rawType.getName().equals(TargetType.class.getName())) { return new ParamConverter<T>() { @Override public T fromString(String value) { // Perform conversion of value // TargetType convertedValue = // ... ; return convertedValue; } @Override public String toString(T value) { if (value == null) { return null; } // Assuming that TargetType.toString is defined return value.toString(); } }; } return null; } }", "// Java import javax.ws.rs.FormParam; import javax.ws.rs.POST; @POST public Response updatePost(@FormParam(\"target\") TargetType target) { }", "// Java import javax.ws.rs.FormParam; import javax.ws.rs.POST; import javax.ws.rs.DefaultValue; import javax.ws.rs.ext.ParamConverter.Lazy; @POST public Response updatePost( @FormParam(\"target\") @DefaultValue(\"default val\") @ParamConverter.Lazy TargetType target) { }", "import javax.ws.rs.QueryParam; import javax.ws.rs.PathParam; import javax.ws.rs.POST; import javax.ws.rs.Path; @Path(\"/monstersforhire/\") public class MonsterService { @POST public void updateMonster(@QueryParam(\"\") Monster bean) { } } public class Monster { String type; String id; }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/RESTParameters
Chapter 1. Planning an upgrade
Chapter 1. Planning an upgrade An in-place upgrade is the recommended way to upgrade your system to a later major version of RHEL. To ensure that you are aware of all major changes between RHEL 6 and RHEL 7, consult the Migration Planning Guide before beginning the in-place upgrade process. You can also verify whether your system can be upgraded by running the Preupgrade Assistant . The Preupgrade Assistant assesses your system for potential problems that could interfere with or inhibit the upgrade before any changes are made to your system. See also Known Issues . Note After you perform an in-place upgrade on your system, it is possible to get the working system back only in limited configurations, by using the Red Hat Upgrade Tool integrated rollback capability or by using a suitable custom backup and recovery solution, for example, the Relax-and-Recover (ReaR) utility. For more information, see Rolling back the upgrade . This RHEL 6 to RHEL 7 upgrade procedure is available if your RHEL system meets the following criteria: Red Hat Enterprise Linux 6.10: Your system must have the latest RHEL 6.10 packages installed. Note that for RHEL 6.10, only the Extended Life Phase (ELP) support is available. Architecture and variant: Only the indicated combinations of architecture and variant from the following matrix can be upgraded:
Product Variant | Intel 64-bit architecture | IBM POWER, big endian | IBM Z 64-bit architecture | Intel 32-bit architecture
Server Edition | Available | Available | Available | Not available
HPC Compute Node | Available | N/A | N/A | Not available
Desktop Edition | Not available | N/A | N/A | Not available
Workstation Edition | Not available | N/A | N/A | Not available
Server running CloudForms software | Not available | N/A | N/A | N/A
Server running Satellite software | Not available. To upgrade Satellite environments from RHEL 6 to RHEL 7, see the Red Hat Satellite Installation Guide . | N/A | N/A | N/A
Note Upgrades of 64-bit IBM Z systems are allowed unless Direct Access Storage Device (DASD) with Linux Disk Layout (LDL) is used. Supported packages: The in-place upgrade is available for the following packages: Packages installed from the base repository, for example, rhel-6-server-rpms if the system is the RHEL 6 Server variant on the Intel architecture. The Preupgrade Assistant, Red Hat Upgrade Tool, and any other packages that are required for the upgrade. Note It is recommended to perform the upgrade with a minimum number of packages installed. File systems: File system formats remain intact. As a result, file systems have the same limitations as when they were originally created. Desktop: System upgrades with GNOME and KDE installed are not allowed. For more information, see Upgrading from RHEL 6 to RHEL 7 on Gnome Desktop Environment failed . Virtualization: Upgrades with KVM or VMware virtualization are available. Upgrades of RHEL on Microsoft Hyper-V are not allowed. High Availability: Upgrades of systems using the High Availability add-on are not allowed. Public Clouds: The in-place upgrade is not allowed for on-demand instances on Public Clouds. Third-party packages: The in-place upgrade is not allowed on systems using third-party packages, especially packages with third-party drivers that are needed for booting. The /usr directory: The in-place upgrade is not allowed on systems where the /usr directory is on a separate partition. For more information, see Why does Red Hat Enterprise Linux 6 to 7 in-place upgrade fail if /usr is on separate partition? .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/upgrading_from_rhel_6_to_rhel_7/planning-an-upgrade-from-rhel-6-to-rhel-7upgrading-from-rhel-6-to-rhel-7
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/deploying_and_upgrading_amq_streams_on_openshift/making-open-source-more-inclusive
17.2. Delegating Host Management
17.2. Delegating Host Management Hosts are delegated authority over other hosts through the ipa host-add-managedby command, which creates a managedby entry. After the managedby entry is created, the managing host can retrieve a keytab for the host that it manages. Log in as the admin user. Add the managedby entry. For example, this delegates authority over client2 to client1 . Obtain a ticket as the host client1 : Retrieve a keytab for client2 :
[ "kinit admin", "ipa host-add-managedby client2.example.com --hosts=client1.example.com", "kinit -kt /etc/krb5.keytab host/client1.example.com", "ipa-getkeytab -s server.example.com -k /tmp/client2.keytab -p host/client2.example.com Keytab successfully retrieved and stored in: /tmp/client2.keytab" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/delegating_host_management
Getting started
Getting started OpenShift Container Platform 4.17 Getting started in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "/ws/data/load", "Items inserted in database: 2893", "oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify", "oc login <https://api.your-openshift-server.com> --token=<tokenID>", "oc login <cluster_url> --web", "oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"", "Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".", "oc adm policy add-role-to-user view -z default -n user-getting-started", "oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'", "--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success", "oc get service", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s", "oc create route edge parksmap --service=parksmap", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s", "oc describe pods", "Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s 
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from ovn-kubernetes Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap", "oc scale --current-replicas=1 --replicas=2 deployment/parksmap", "deployment.apps/parksmap scaled", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s", "oc scale --current-replicas=2 --replicas=1 deployment/parksmap", "oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true", "--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi9\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. 
Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success", "oc create route edge nationalparks --service=nationalparks", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc new-app quay.io/centos7/mongodb-36-centos7:master --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'", "--> Found container image dc18f52 (3 years old) from quay.io for \"quay.io/centos7/mongodb-36-centos7:master\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:master\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success", "oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb", "secret/nationalparks-mongodb-parameters created", "oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks", "deployment.apps/nationalparks updated", "oc rollout status deployment nationalparks", "deployment \"nationalparks\" successfully rolled out", "oc rollout status deployment mongodb-nationalparks", "deployment \"mongodb-nationalparks\" successfully rolled out", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load", "\"Items inserted in database: 2893\"", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all", ", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]", "oc label route nationalparks type=parksmap-backend", "route.route.openshift.io/nationalparks labeled", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap 
parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/getting_started/index
Chapter 5. Installing RHACS on other platforms
Chapter 5. Installing RHACS on other platforms 5.1. High-level overview of installing RHACS on other platforms Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides security services for self-managed RHACS on platforms such as Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (Google GKE), and Microsoft Azure Kubernetes Service (Microsoft AKS). Before you install: Understand the installation methods for different platforms . Understand Red Hat Advanced Cluster Security for Kubernetes architecture . Check the default resource requirements page . The following list provides a high-level overview of installation steps: Install Central services on a cluster using Helm charts or the roxctl CLI. Generate and apply an init bundle . Install secured cluster resources on each of your secured clusters. 5.2. Installing Central services for RHACS on other platforms Central is the resource that contains the RHACS application management interface and services. It handles data persistence, API interactions, and RHACS portal access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters. You can install Central by using one of the following methods: Install using Helm charts Install using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it) 5.2.1. Install Central using Helm charts You can install Central using Helm charts without any customization, using the default values, or by using Helm charts with additional customizations of configuration parameters. 5.2.1.1. Install Central using Helm charts without customization You can install RHACS on your Red Hat OpenShift cluster without any customizations. You must add the Helm chart repository and install the central-services Helm chart to install the centralized components of Central and Scanner. 5.2.1.1.1. Adding the Helm chart repository Procedure Add the RHACS charts repository. USD helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/ The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including: Central services Helm chart ( central-services ) for installing the centralized components (Central and Scanner). Note You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation. Secured Cluster Services Helm chart ( secured-cluster-services ) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim). Note Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor. Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 5.2.1.1.2. Installing the central-services Helm chart without customizations Use the following instructions to install the central-services Helm chart to deploy the centralized components (Central and Scanner). Prerequisites You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . 
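If you would rather not pass registry credentials on the helm command line, one option is to create an image pull secret yourself and reference it through the imagePullSecrets.useExisting parameter described later in this section. The following manifest is only a sketch: the secret name stackrox-pull-secret and the credential values are placeholders, not names defined by RHACS.
# Hypothetical pull secret for registry.redhat.io; create it in the
# namespace that will hold Central (stackrox in the examples below).
apiVersion: v1
kind: Secret
metadata:
  name: stackrox-pull-secret        # placeholder name
  namespace: stackrox
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "registry.redhat.io": {
          "auth": "<base64 of username:password>"
        }
      }
    }
After applying a secret like this, the install command can reference it with --set imagePullSecrets.useExisting="stackrox-pull-secret" instead of a username and password.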
Procedure Run the following command to install Central services and expose Central using a route: USD helm install -n stackrox \ --create-namespace stackrox-central-services rhacs/central-services \ --set imagePullSecrets.username=<username> \ 1 --set imagePullSecrets.password=<password> \ 2 --set central.exposure.route.enabled=true 1 Include the user name for your pull secret for Red Hat Container Registry authentication. 2 Include the password for your pull secret for Red Hat Container Registry authentication. Or, run the following command to install Central services and expose Central using a load balancer: USD helm install -n stackrox \ --create-namespace stackrox-central-services rhacs/central-services \ --set imagePullSecrets.username=<username> \ 1 --set imagePullSecrets.password=<password> \ 2 --set central.exposure.loadBalancer.enabled=true 1 Include the user name for your pull secret for Red Hat Container Registry authentication. 2 Include the password for your pull secret for Red Hat Container Registry authentication. Or, run the following command to install Central services and expose Central using port forwarding: USD helm install -n stackrox \ --create-namespace stackrox-central-services rhacs/central-services \ --set imagePullSecrets.username=<username> \ 1 --set imagePullSecrets.password=<password> 2 1 Include the user name for your pull secret for Red Hat Container Registry authentication. 2 Include the password for your pull secret for Red Hat Container Registry authentication. Important If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example: env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain If you already created one or more image pull secrets in the namespace in which you are installing, instead of using a username and password, you can use --set imagePullSecrets.useExisting="<pull-secret-1;pull-secret-2>" . Do not use image pull secrets: If you are pulling your images from quay.io/stackrox-io or a registry in a private network that does not require authentication. Use --set imagePullSecrets.allowNone=true instead of specifying a username and password. If you already configured image pull secrets in the default service account in the namespace in which you are installing. Use --set imagePullSecrets.useFromDefaultServiceAccount=true instead of specifying a username and password. The output of the installation command includes: An automatically generated administrator password. Instructions on storing all the configuration values. Any warnings that Helm generates. 5.2.1.2. Install Central using Helm charts with customizations You can install RHACS on your Red Hat OpenShift cluster with customizations by using Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files. Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes: Public configuration file values-public.yaml : Use this file to save all non-sensitive configuration options. Private configuration file values-private.yaml : Use this file to save all sensitive configuration options. Ensure that you store this file securely.
Configuration file declarative-config-values.yaml : Create this file if you are using declarative configuration to add the declarative configuration mounts to Central. 5.2.1.2.1. Private configuration file This section lists the configurable parameters of the values-private.yaml file. There are no default values for these parameters. 5.2.1.2.1.1. Image pull secrets The credentials that are required for pulling images from the registry depend on the following factors: If you are using a custom registry, you must specify these parameters: imagePullSecrets.username imagePullSecrets.password image.registry If you do not use a username and password to log in to the custom registry, you must specify one of the following parameters: imagePullSecrets.allowNone imagePullSecrets.useExisting imagePullSecrets.useFromDefaultServiceAccount Parameter Description imagePullSecrets.username The username of the account that is used to log in to the registry. imagePullSecrets.password The password of the account that is used to log in to the registry. imagePullSecrets.allowNone Use true if you are using a custom registry and it allows pulling images without credentials. imagePullSecrets.useExisting A comma-separated list of secrets as values. For example, secret1, secret2, secretN . Use this option if you have already created pre-existing image pull secrets with the given name in the target namespace. imagePullSecrets.useFromDefaultServiceAccount Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets. 5.2.1.2.1.2. Proxy configuration If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example: env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain Parameter Description env.proxyConfig Your proxy configuration. 5.2.1.2.1.3. Central Configurable parameters for Central. For a new installation, you can skip the following parameters: central.jwtSigner.key central.serviceTLS.cert central.serviceTLS.key central.adminPassword.value central.adminPassword.htpasswd central.db.serviceTLS.cert central.db.serviceTLS.key central.db.password.value When you do not specify values for these parameters the Helm chart autogenerates values for them. If you want to modify these values you can use the helm upgrade command and specify the values using the --set option. Important For setting the administrator password, you can only use either central.adminPassword.value or central.adminPassword.htpasswd , but not both. Parameter Description central.jwtSigner.key A private key which RHACS should use for signing JSON web tokens (JWTs) for authentication. central.serviceTLS.cert An internal certificate that the Central service should use for deploying Central. central.serviceTLS.key The private key of the internal certificate that the Central service should use. central.defaultTLS.cert The user-facing certificate that Central should use. RHACS uses this certificate for RHACS portal. For a new installation, you must provide a certificate, otherwise, RHACS installs Central by using a self-signed certificate. If you are upgrading, RHACS uses the existing certificate and its key. central.defaultTLS.key The private key of the user-facing certificate that Central should use. 
For a new installation, you must provide the private key; otherwise, RHACS installs Central by using a self-signed certificate. If you are upgrading, RHACS uses the existing certificate and its key. central.db.password.value Connection password for Central database. central.adminPassword.value Administrator password for logging into RHACS. central.adminPassword.htpasswd Administrator password for logging into RHACS. This password is stored in hashed format using bcrypt. central.db.serviceTLS.cert An internal certificate that the Central DB service should use for deploying Central DB. central.db.serviceTLS.key The private key of the internal certificate that the Central DB service should use. central.db.password.value The password used to connect to the Central DB. Note If you are using the central.adminPassword.htpasswd parameter, you must use a bcrypt encoded password hash. You can run the command htpasswd -nB admin to generate a password hash. For example, htpasswd: | admin:<bcrypt-hash> 5.2.1.2.1.4. Scanner Configurable parameters for the StackRox Scanner and Scanner V4. For a new installation, you can skip the following parameters and the Helm chart autogenerates values for them. Otherwise, if you are upgrading to a new version, specify the values for the following parameters: scanner.dbPassword.value scanner.serviceTLS.cert scanner.serviceTLS.key scanner.dbServiceTLS.cert scanner.dbServiceTLS.key scannerV4.db.password.value scannerV4.indexer.serviceTLS.cert scannerV4.indexer.serviceTLS.key scannerV4.matcher.serviceTLS.cert scannerV4.matcher.serviceTLS.key scannerV4.db.serviceTLS.cert scannerV4.db.serviceTLS.key Parameter Description scanner.dbPassword.value The password to use for authentication with the Scanner database. Do not modify this parameter because RHACS automatically creates and uses its value internally. scanner.serviceTLS.cert An internal certificate that the StackRox Scanner service should use for deploying the StackRox Scanner. scanner.serviceTLS.key The private key of the internal certificate that the Scanner service should use. scanner.dbServiceTLS.cert An internal certificate that the Scanner-db service should use for deploying the Scanner database. scanner.dbServiceTLS.key The private key of the internal certificate that the Scanner-db service should use. scannerV4.db.password.value The password to use for authentication with the Scanner V4 database. Do not modify this parameter because RHACS automatically creates and uses its value internally. scannerV4.db.serviceTLS.cert An internal certificate that the Scanner V4 DB service should use for deploying the Scanner V4 database. scannerV4.db.serviceTLS.key The private key of the internal certificate that the Scanner V4 DB service should use. scannerV4.indexer.serviceTLS.cert An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Indexer. scannerV4.indexer.serviceTLS.key The private key of the internal certificate that the Scanner V4 Indexer should use. scannerV4.matcher.serviceTLS.cert An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Matcher. scannerV4.matcher.serviceTLS.key The private key of the internal certificate that the Scanner V4 Matcher should use. 5.2.1.2.2. Public configuration file This section lists the configurable parameters of the values-public.yaml file. 5.2.1.2.2.1. Image pull secrets Image pull secrets are the credentials required for pulling images from your registry.
Parameter Description imagePullSecrets.allowNone Use true if you are using a custom registry and it allows pulling images without credentials. imagePullSecrets.useExisting A comma-separated list of secrets as values. For example, secret1, secret2 . Use this option if you have already created pre-existing image pull secrets with the given name in the target namespace. imagePullSecrets.useFromDefaultServiceAccount Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets. 5.2.1.2.2.2. Image Image declares the configuration to set up the main registry, which the Helm chart uses to resolve images for the central.image , scanner.image , scanner.dbImage , scannerV4.image , and scannerV4.db.image parameters. Parameter Description image.registry Address of your image registry. Either use a hostname, such as registry.redhat.io , or a remote registry hostname, such as us.gcr.io/stackrox-mirror . 5.2.1.2.2.3. Environment variables Red Hat Advanced Cluster Security for Kubernetes automatically detects your cluster environment and sets values for env.openshift , env.istio , and env.platform . Only set these values to override the automatic cluster environment detection. Parameter Description env.openshift Use true for installing on an OpenShift Container Platform cluster and overriding automatic cluster environment detection. env.istio Use true for installing on an Istio enabled cluster and overriding automatic cluster environment detection. env.platform The platform on which you are installing RHACS. Set its value to default or gke to specify cluster platform and override automatic cluster environment detection. env.offlineMode Use true to use RHACS in offline mode. 5.2.1.2.2.4. Additional trusted certificate authorities The RHACS automatically references the system root certificates to trust. When Central, the StackRox Scanner, or Scanner V4 must reach out to services that use certificates issued by an authority in your organization or a globally trusted partner organization, you can add trust for these services by specifying the root certificate authority to trust by using the following parameter: Parameter Description additionalCAs.<certificate_name> Specify the PEM encoded certificate of the root certificate authority to trust. 5.2.1.2.2.5. Default network policies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where Central is installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to Disabled . The default value is Enabled . Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. Parameter Description network.enableNetworkPolicies Specify if RHACS creates default network policies to allow communication between components. To create your own network policies, set this parameter to False . The default value is True . 5.2.1.2.2.6. Central Configurable parameters for Central. You must specify a persistent storage option as either hostPath or persistentVolumeClaim . For exposing Central deployment for external access. You must specify one parameter, either central.exposure.loadBalancer , central.exposure.nodePort , or central.exposure.route . 
When you do not specify any value for these parameters, you must manually expose Central or access it by using port-forwarding. The following table also includes settings for an external PostgreSQL database. Parameter Description central.declarativeConfiguration.mounts.configMaps Mounts config maps used for declarative configurations. central.declarativeConfiguration.mounts.secrets Mounts secrets used for declarative configurations. central.endpointsConfig The endpoint configuration options for Central. central.nodeSelector Specify a node selector label as label-key: label-value to force Central to only schedule on nodes with the specified label. central.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. central.exposeMonitoring Specify true to expose Prometheus metrics endpoint for Central on port number 9090 . central.image.registry A custom registry that overrides the global image.registry parameter for the Central image. central.image.name The custom image name that overrides the default Central image name ( main ). central.image.tag The custom image tag that overrides the default tag for Central image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the helm upgrade command. If you mirror Central images in your own registry, do not modify the original image tags. central.image.fullRef Full reference including registry address, image name, and image tag for the Central image. Setting a value for this parameter overrides the central.image.registry , central.image.name , and central.image.tag parameters. central.resources.requests.memory The memory request for Central. central.resources.requests.cpu The CPU request for Central. central.resources.limits.memory The memory limit for Central. central.resources.limits.cpu The CPU limit for Central. central.persistence.hostPath The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option. central.persistence.persistentVolumeClaim.claimName The name of the persistent volume claim (PVC) you are using. central.persistence.persistentVolumeClaim.createClaim Use true to create a new PVC, or false to use an existing claim. central.persistence.persistentVolumeClaim.size The size (in GiB) of the persistent volume managed by the specified claim. central.exposure.loadBalancer.enabled Use true to expose Central by using a load balancer. central.exposure.loadBalancer.port The port number on which to expose Central. The default port number is 443. central.exposure.nodePort.enabled Use true to expose Central by using the node port service. central.exposure.nodePort.port The port number on which to expose Central. When you skip this parameter, OpenShift Container Platform automatically assigns a port number. Red Hat recommends that you do not specify a port number if you are exposing RHACS by using a node port. central.exposure.route.enabled Use true to expose Central by using a route. This parameter is only available for OpenShift Container Platform clusters. central.db.external Use true to specify that Central DB should not be deployed and that an external database will be used. central.db.source.connectionString The connection string for Central to use to connect to the database.
This is only used when central.db.external is set to true. The connection string must be in keyword/value format as described in the PostgreSQL documentation in "Additional resources". Only PostgreSQL 13 is supported. Connections through PgBouncer are not supported. The user must be a superuser with the ability to create and delete databases. central.db.source.minConns The minimum number of connections to the database to be established. central.db.source.maxConns The maximum number of connections to the database to be established. central.db.source.statementTimeoutMs The number of milliseconds a single query or transaction can be active against the database. central.db.postgresConfig The postgresql.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". central.db.hbaConfig The pg_hba.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". central.db.nodeSelector Specify a node selector label as label-key: label-value to force Central DB to only schedule on nodes with the specified label. central.db.image.registry A custom registry that overrides the global image.registry parameter for the Central DB image. central.db.image.name The custom image name that overrides the default Central DB image name ( central-db ). central.db.image.tag The custom image tag that overrides the default tag for Central DB image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the helm upgrade command. If you mirror Central DB images in your own registry, do not modify the original image tags. central.db.image.fullRef Full reference including registry address, image name, and image tag for the Central DB image. Setting a value for this parameter overrides the central.db.image.registry , central.db.image.name , and central.db.image.tag parameters. central.db.resources.requests.memory The memory request for Central DB. central.db.resources.requests.cpu The CPU request for Central DB. central.db.resources.limits.memory The memory limit for Central DB. central.db.resources.limits.cpu The CPU limit for Central DB. central.db.persistence.hostPath The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option. central.db.persistence.persistentVolumeClaim.claimName The name of the persistent volume claim (PVC) you are using. central.db.persistence.persistentVolumeClaim.createClaim Use true to create a new persistent volume claim, or false to use an existing claim. central.db.persistence.persistentVolumeClaim.size The size (in GiB) of the persistent volume managed by the specified claim. 5.2.1.2.2.7. StackRox Scanner The following table lists the configurable parameters for the StackRox Scanner. This is the scanner used for node and platform scanning. If Scanner V4 is not enabled, the StackRox Scanner also performs image scanning. Beginning with version 4.4, Scanner V4 can be enabled to provide image scanning. See the table for Scanner V4 parameters. Parameter Description scanner.disable Use true to install RHACS without the StackRox Scanner. When you use it with the helm upgrade command, Helm removes the existing StackRox Scanner deployment. scanner.exposeMonitoring Specify true to expose Prometheus metrics endpoint for the StackRox Scanner on port number 9090 . scanner.replicas The number of replicas to create for the StackRox Scanner deployment.
When you use it with the scanner.autoscaling parameter, this value sets the initial number of replicas. scanner.logLevel Configure the log level for the StackRox Scanner. Red Hat recommends that you not change the default log level value ( INFO ). scanner.nodeSelector Specify a node selector label as label-key: label-value to force the StackRox Scanner to only schedule on nodes with the specified label. scanner.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes. scanner.autoscaling.disable Use true to disable autoscaling for the StackRox Scanner deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect. scanner.autoscaling.minReplicas The minimum number of replicas for autoscaling. scanner.autoscaling.maxReplicas The maximum number of replicas for autoscaling. scanner.resources.requests.memory The memory request for the StackRox Scanner. scanner.resources.requests.cpu The CPU request for the StackRox Scanner. scanner.resources.limits.memory The memory limit for the StackRox Scanner. scanner.resources.limits.cpu The CPU limit for the StackRox Scanner. scanner.dbResources.requests.memory The memory request for the StackRox Scanner database deployment. scanner.dbResources.requests.cpu The CPU request for the StackRox Scanner database deployment. scanner.dbResources.limits.memory The memory limit for the StackRox Scanner database deployment. scanner.dbResources.limits.cpu The CPU limit for the StackRox Scanner database deployment. scanner.image.registry A custom registry for the StackRox Scanner image. scanner.image.name The custom image name that overrides the default StackRox Scanner image name ( scanner ). scanner.dbImage.registry A custom registry for the StackRox Scanner DB image. scanner.dbImage.name The custom image name that overrides the default StackRox Scanner DB image name ( scanner-db ). scanner.dbNodeSelector Specify a node selector label as label-key: label-value to force the StackRox Scanner DB to only schedule on nodes with the specified label. scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes. 5.2.1.2.2.8. Scanner V4 The following table lists the configurable parameters for Scanner V4. Parameter Description scannerV4.db.persistence.persistentVolumeClaim.claimName The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is scanner-v4-db if not set. To prevent data loss, the PVC is not removed automatically when Central is deleted. scannerV4.db.persistence.persistentVolumeClaim.size The size of the PVC to manage persistent data for Scanner V4. scannerV4.db.persistence.persistentVolumeClaim.storageClassName The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. scannerV4.disable Use false to enable Scanner V4. When setting this parameter, the StackRox Scanner must also be enabled by setting scanner.disable=false . Until feature parity between the StackRox Scanner and Scanner V4 is reached, Scanner V4 can only be used in combination with the StackRox Scanner. 
Enabling Scanner V4 without also enabling the StackRox Scanner is not supported. When you set this parameter to true with the helm upgrade command, Helm removes the existing Scanner V4 deployment. scannerV4.exposeMonitoring Specify true to expose Prometheus metrics endpoint for Scanner V4 on port number 9090 . scannerV4.indexer.replicas The number of replicas to create for the Scanner V4 Indexer deployment. When you use it with the scannerV4.indexer.autoscaling parameter, this value sets the initial number of replicas. scannerV4.indexer.logLevel Configure the log level for the Scanner V4 Indexer. Red Hat recommends that you not change the default log level value ( INFO ). scannerV4.indexer.nodeSelector Specify a node selector label as label-key: label-value to force the Scanner V4 Indexer to only schedule on nodes with the specified label. scannerV4.indexer.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. scannerV4.indexer.autoscaling.disable Use true to disable autoscaling for the Scanner V4 Indexer deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect. scannerV4.indexer.autoscaling.minReplicas The minimum number of replicas for autoscaling. scannerV4.indexer.autoscaling.maxReplicas The maximum number of replicas for autoscaling. scannerV4.indexer.resources.requests.memory The memory request for the Scanner V4 Indexer. scannerV4.indexer.resources.requests.cpu The CPU request for the Scanner V4 Indexer. scannerV4.indexer.resources.limits.memory The memory limit for the Scanner V4 Indexer. scannerV4.indexer.resources.limits.cpu The CPU limit for the Scanner V4 Indexer. scannerV4.matcher.replicas The number of replicas to create for the Scanner V4 Matcher deployment. When you use it with the scannerV4.matcher.autoscaling parameter, this value sets the initial number of replicas. scannerV4.matcher.logLevel Red Hat recommends that you not change the default log level value ( INFO ). scannerV4.matcher.nodeSelector Specify a node selector label as label-key: label-value to force the Scanner V4 Matcher to only schedule on nodes with the specified label. scannerV4.matcher.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. This parameter is mainly used for infrastructure nodes. scannerV4.matcher.autoscaling.disable Use true to disable autoscaling for the Scanner V4 Matcher deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect. scannerV4.matcher.autoscaling.minReplicas The minimum number of replicas for autoscaling. scannerV4.matcher.autoscaling.maxReplicas The maximum number of replicas for autoscaling. scannerV4.matcher.resources.requests.memory The memory request for the Scanner V4 Matcher. scannerV4.matcher.resources.requests.cpu The CPU request for the Scanner V4 Matcher. scannerV4.db.resources.requests.memory The memory request for the Scanner V4 database deployment. scannerV4.db.resources.requests.cpu The CPU request for the Scanner V4 database deployment. scannerV4.db.resources.limits.memory The memory limit for the Scanner V4 database deployment. scannerV4.db.resources.limits.cpu The CPU limit for the Scanner V4 database deployment. 
scannerV4.db.nodeSelector Specify a node selector label as label-key: label-value to force the Scanner V4 DB to only schedule on nodes with the specified label. scannerV4.db.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 DB. This parameter is mainly used for infrastructure nodes. scannerV4.db.image.registry A custom registry for the Scanner V4 DB image. scannerV4.db.image.name The custom image name that overrides the default Scanner V4 DB image name ( scanner-v4-db ). scannerV4.image.registry A custom registry for the Scanner V4 image. scannerV4.image.name The custom image name that overrides the default Scanner V4 image name ( scanner-v4 ). 5.2.1.2.2.9. Customization Use these parameters to specify additional attributes for all objects that RHACS creates. Parameter Description customize.labels A custom label to attach to all objects. customize.annotations A custom annotation to attach to all objects. customize.podLabels A custom label to attach to all deployments. customize.podAnnotations A custom annotation to attach to all deployments. customize.envVars A custom environment variable for all containers in all objects. customize.central.labels A custom label to attach to all objects that Central creates. customize.central.annotations A custom annotation to attach to all objects that Central creates. customize.central.podLabels A custom label to attach to all Central deployments. customize.central.podAnnotations A custom annotation to attach to all Central deployments. customize.central.envVars A custom environment variable for all Central containers. customize.scanner.labels A custom label to attach to all objects that Scanner creates. customize.scanner.annotations A custom annotation to attach to all objects that Scanner creates. customize.scanner.podLabels A custom label to attach to all Scanner deployments. customize.scanner.podAnnotations A custom annotation to attach to all Scanner deployments. customize.scanner.envVars A custom environment variable for all Scanner containers. customize.scanner-db.labels A custom label to attach to all objects that Scanner DB creates. customize.scanner-db.annotations A custom annotation to attach to all objects that Scanner DB creates. customize.scanner-db.podLabels A custom label to attach to all Scanner DB deployments. customize.scanner-db.podAnnotations A custom annotation to attach to all Scanner DB deployments. customize.scanner-db.envVars A custom environment variable for all Scanner DB containers. customize.scanner-v4-indexer.labels A custom label to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. customize.scanner-v4-indexer.annotations A custom annotation to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. customize.scanner-v4-indexer.podLabels A custom label to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. customize.scanner-v4-indexer.podAnnotations A custom annotation to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. customize.scanner-4v-indexer.envVars A custom environment variable for all Scanner V4 Indexer containers and the pods belonging to them. customize.scanner-v4-matcher.labels A custom label to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. 
customize.scanner-v4-matcher.annotations A custom annotation to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. customize.scanner-v4-matcher.podLabels A custom label to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. customize.scanner-v4-matcher.podAnnotations A custom annotation to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. customize.scanner-4v-matcher.envVars A custom environment variable for all Scanner V4 Matcher containers and the pods belonging to them. customize.scanner-v4-db.labels A custom label to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. customize.scanner-v4-db.annotations A custom annotation to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. customize.scanner-v4-db.podLabels A custom label to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. customize.scanner-v4-db.podAnnotations A custom annotation to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. customize.scanner-4v-db.envVars A custom environment variable for all Scanner V4 DB containers and the pods belonging to them. You can also use: the customize.other.service/*.labels and the customize.other.service/*.annotations parameters, to specify labels and annotations for all objects. or, provide a specific service name, for example, customize.other.service/central-loadbalancer.labels and customize.other.service/central-loadbalancer.annotations as parameters and set their value. 5.2.1.2.2.10. Advanced customization Important The parameters specified in this section are for information only. Red Hat does not support RHACS instances with modified namespace and release names. Parameter Description allowNonstandardNamespace Use true to deploy RHACS into a namespace other than the default namespace stackrox . allowNonstandardReleaseName Use true to deploy RHACS with a release name other than the default stackrox-central-services . 5.2.1.2.3. Declarative configuration values To use declarative configuration, you must create a YAML file (in this example, named "declarative-config-values.yaml") that adds the declarative configuration mounts to Central. This file is used in a Helm installation. Procedure Create the YAML file (in this example, named declarative-config-values.yaml ) using the following example as a guideline: central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs Install the Central services Helm chart as documented in the "Installing the central-services Helm chart", referencing the declarative-config-values.yaml file. Additional resources Connection Strings - PostgreSQL Docs Parameter Interaction via the Configuration File - PostgreSQL Docs The pg_hba.conf File - PostgreSQL Docs 5.2.1.2.4. Installing the central-services Helm chart After you configure the values-public.yaml and values-private.yaml files, install the central-services Helm chart to deploy the centralized components (Central and Scanner). Procedure Run the following command: USD helm install -n stackrox --create-namespace \ stackrox-central-services rhacs/central-services \ -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1 1 Use the -f option to specify the paths for your YAML configuration files. 
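As an illustration only, the two values files passed with the -f options might look like the following minimal sketch. Every value shown is a placeholder or an example chosen for this sketch, not a required or default setting; the parameter paths themselves come from the tables earlier in this section.
# values-public.yaml (non-sensitive settings; illustrative values only)
env:
  openshift: true                    # only needed to override auto-detection
central:
  exposure:
    route:
      enabled: true                  # expose Central with a route
  persistence:
    persistentVolumeClaim:
      claimName: stackrox-db
      createClaim: true
      size: 100                      # GiB
customize:
  labels:
    owner: security-team             # hypothetical label
# values-private.yaml (sensitive settings; store securely)
imagePullSecrets:
  username: <registry_username>
  password: <registry_password>
Keeping the non-sensitive and sensitive settings in separate files, as recommended earlier, makes it easier to track the public file in version control while handling the private file through a secret store.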
Note Optional: If you are using declarative configuration, add -f <path_to_declarative-config-values.yaml> to this command to mount the declarative configuration file in Central. 5.2.1.3. Changing configuration options after deploying the central-services Helm chart You can make changes to any configuration options after you have deployed the central-services Helm chart. When using the helm upgrade command to make changes, the following guidelines and requirements apply: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes. If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values. If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command. Procedure Update the values-public.yaml and values-private.yaml configuration files with new values. Run the helm upgrade command and specify the configuration files using the -f option: USD helm upgrade -n stackrox \ stackrox-central-services rhacs/central-services \ --reuse-values \ 1 -f <path_to_init_bundle_file> \ -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml> 1 If you have modified values that are not included in the values-public.yaml and values-private.yaml files, include the --reuse-values parameter. 5.2.2. Install Central using the roxctl CLI Warning For production environments, Red Hat recommends using the Operator or Helm charts to install RHACS. Do not use the roxctl install method unless you have a specific installation need that requires using this method. 5.2.2.1. Installing the roxctl CLI To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl CLI by downloading the binary. You can install roxctl on Linux, Windows, or macOS. 5.2.2.1.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 5.2.2.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures.
Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 5.2.2.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 5.2.2.2. Using the interactive installer Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment. Procedure Run the interactive install command: USD roxctl central generate interactive Important Installing RHACS using the roxctl CLI creates PodSecurityPolicy (PSP) objects by default for backward compatibility. If you install RHACS on Kubernetes versions 1.25 and newer or OpenShift Container Platform version 4.12 and newer, you must disable the PSP object creation. To do this, specify --enable-pod-security-policies option as false for the roxctl central generate and roxctl sensor generate commands. Press Enter to accept the default value for a prompt or enter custom values as required. 
The following example shows the interactive installer prompts: Enter path to the backup bundle from which to restore keys and certificates (optional): Enter read templates from local filesystem (default: "false"): Enter path to helm templates on your local filesystem (default: "/path"): Enter PEM cert bundle file (optional): 1 Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "true"): 2 Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): Enter default container images settings (development_build, stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "development_build"): Enter the directory to output the deployment bundle to (default: "central-bundle"): Enter the OpenShift major version (3 or 4) to deploy on (default: "0"): Enter whether to enable telemetry (default: "false"): Enter central-db image to use (if unset, a default will be used according to --image-defaults): Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): Enter the method of exposing Central (route, lb, np, none) (default: "none"): 3 Enter main image to use (if unset, a default will be used according to --image-defaults): Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"): Enter list of secrets to add as declarative configuration mounts in central (default: "[]"): 4 Enter list of config maps to add as declarative configuration mounts in central (default: "[]"): 5 Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"): Enter scanner-db image to use (if unset, a default will be used according to --image-defaults): Enter scanner image to use (if unset, a default will be used according to --image-defaults): Enter Central volume type (hostpath, pvc): 6 Enter external volume name for Central (default: "stackrox-db"): Enter external volume size in Gi for Central (default: "100"): Enter storage class name for Central (optional if you have a default StorageClass configured): Enter external volume name for Central DB (default: "central-db"): Enter external volume size in Gi for Central DB (default: "100"): Enter storage class name for Central DB (optional if you have a default StorageClass configured): 1 If you want to add a custom TLS certificate, provide the file path for the PEM-encoded certificate. When you specify a custom certificate the interactive installer also prompts you to provide a PEM private key for the custom certificate you are using. 2 If you are running Kubernetes version 1.25 or later, set this value to false . 3 To use the RHACS portal, you must expose Central by using a route, a load balancer or a node port. 4 For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes". 5 For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes". 6 If you plan to install Red Hat Advanced Cluster Security for Kubernetes on OpenShift Container Platform with a hostPath volume, you must modify the SELinux policy. 
Warning On OpenShift Container Platform, for using a hostPath volume, you must modify the SELinux policy to allow access to the directory, which the host and the container share. It is because SELinux blocks directory sharing by default. To modify the SELinux policy, run the following command: USD sudo chcon -Rt svirt_sandbox_file_t <full_volume_path> However, Red Hat does not recommend modifying the SELinux policy, instead use PVC when installing on OpenShift Container Platform. On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. In addition, it shows on-screen instructions for the scripts you need to run to deploy additional trusted certificate authorities, Central and Scanner, and the authentication instructions for logging into the RHACS portal along with the autogenerated password if you did not provide one when answering the prompts. 5.2.2.3. Running the Central installation scripts After you run the interactive installer, you can run the setup.sh script to install Central. Procedure Run the setup.sh script to configure image registry access: USD ./central-bundle/central/scripts/setup.sh Create the necessary resources: USD oc create -R -f central-bundle/central Check the deployment progress: USD oc get pod -n stackrox -w After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address. Exposure method Command Address Example Route oc -n stackrox get route central The address under the HOST/PORT column in the output https://central-stackrox.example.route Node Port oc get node -owide && oc -n stackrox get svc central-loadbalancer IP or hostname of any node, on the port shown for the service https://198.51.100.0:31489 Load Balancer oc -n stackrox get svc central-loadbalancer EXTERNAL-IP or hostname shown for the service, on port 443 https://192.0.2.0 None central-bundle/central/scripts/port-forward.sh 8443 https://localhost:8443 https://localhost:8443 Note If you have selected autogenerated password during the interactive install, you can run the following command to see it for logging into Central: USD cat central-bundle/password 5.3. Generating and applying an init bundle for RHACS on other platforms Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources. Note You must have the Admin user role to create an init bundle. 5.3.1. Generating an init bundle 5.3.1.1. Generating an init bundle by using the RHACS portal You can create an init bundle containing secrets by using the RHACS portal. Note You must have the Admin user role to create an init bundle. Procedure Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method". Log in to the RHACS portal. If you do not have secured clusters, the Platform Configuration Clusters page appears. Click Create init bundle . Enter a name for the cluster init bundle. Select your platform. Select the installation method you will use for your secured clusters: Operator or Helm chart . 
Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method. Important Store this bundle securely because it contains secrets. Apply the init bundle by using it to create resources on the secured cluster. Install secured cluster services on each cluster. 5.3.1.2. Generating an init bundle by using the roxctl CLI You can create an init bundle with secrets by using the roxctl CLI. Note You must have the Admin user role to create init bundles. Prerequisites You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables: Set the ROX_API_TOKEN by running the following command: USD export ROX_API_TOKEN=<api_token> Set the ROX_CENTRAL_ADDRESS environment variable by running the following command: USD export ROX_CENTRAL_ADDRESS=<address>:<port_number> Procedure To generate a cluster init bundle containing secrets for Helm installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate --output \ <cluster_init_bundle_name> cluster_init_bundle.yaml To generate a cluster init bundle containing secrets for Operator installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate --output-secrets \ <cluster_init_bundle_name> cluster_init_bundle.yaml Important Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters. 5.3.1.3. Applying the init bundle on the secured cluster Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the cluster. Applying the init bundle allows the services on the secured cluster to communicate with Central. Note If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section. Prerequisites You must have generated an init bundle containing secrets. You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters. Procedure To create resources, perform only one of the following steps: Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create . When the command is complete, the display shows that the collector-tls , sensor-tls , and admission-control-tls` resources were created. Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources: USD oc create -f <init_bundle>.yaml \ 1 -n <stackrox> 2 1 Specify the file name of the init bundle containing the secrets. 2 Specify the name of the project where Central services are installed. 
Using the kubectl CLI, run the following commands to create the resources: USD kubectl create namespace stackrox 1 USD kubectl create -f <init_bundle>.yaml \ 2 -n <stackrox> 3 1 Create the project where secured cluster resources will be installed. This example uses stackrox . 2 Specify the file name of the init bundle containing the secrets. 3 Specify the project name that you created. This example uses stackrox . 5.3.2. steps Install RHACS secured cluster services in all clusters that you want to monitor. 5.4. Installing Secured Cluster services for RHACS on other platforms You can install RHACS on your secured clusters for platforms such as Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (Google GKE), and Microsoft Azure Kubernetes Service (Microsoft AKS). 5.4.1. Installing RHACS on secured clusters by using Helm charts You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters. 5.4.1.1. Installing RHACS on secured clusters by using Helm charts without customizations 5.4.1.1.1. Adding the Helm chart repository Procedure Add the RHACS charts repository. USD helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/ The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including: Central services Helm chart ( central-services ) for installing the centralized components (Central and Scanner). Note You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation. Secured Cluster Services Helm chart ( secured-cluster-services ) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim). Note Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor. Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 5.4.1.1.2. Installing the secured-cluster-services Helm chart without customization Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim). Prerequisites You must have generated an RHACS init bundle for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the address that you are exposing the Central service on. Additional resources Generating and applying an init bundle for RHACS on other platforms 5.4.1.2. Configuring the secured-cluster-services Helm chart with customizations This section describes Helm chart configuration parameters that you can use with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files. Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes: Public configuration file values-public.yaml : Use this file to save all non-sensitive configuration options. Private configuration file values-private.yaml : Use this file to save all sensitive configuration options. Ensure that you store this file securely. 
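As a rough illustration of how settings are typically split between these two files (a sketch only; the available keys are listed in the parameter reference that follows), non-sensitive options such as the cluster name and Central endpoint belong in values-public.yaml, while key material such as the internal service-to-service TLS certificates belongs in values-private.yaml:
# values-public.yaml (non-sensitive options)
clusterName: production-cluster            # placeholder cluster name
centralEndpoint: central.example.com:443   # placeholder endpoint
# values-private.yaml (sensitive options; store this file securely)
sensor:
  serviceTLS:
    cert: |
      -----BEGIN CERTIFICATE-----
      # certificate contents omitted
    key: |
      -----BEGIN PRIVATE KEY-----
      # key contents omitted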
Important While using the secured-cluster-services Helm chart, do not modify the values.yaml file that is part of the chart. 5.4.1.2.1. Configuration parameters Parameter Description clusterName Name of your cluster. centralEndpoint Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss:// . When configuring multiple clusters, use the hostname for the address. For example, central.example.com . sensor.endpoint Address of the Sensor endpoint including port number. sensor.imagePullPolicy Image pull policy for the Sensor container. sensor.serviceTLS.cert The internal service-to-service TLS certificate that Sensor uses. sensor.serviceTLS.key The internal service-to-service TLS certificate key that Sensor uses. sensor.resources.requests.memory The memory request for the Sensor container. Use this parameter to override the default value. sensor.resources.requests.cpu The CPU request for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.memory The memory limit for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.cpu The CPU limit for the Sensor container. Use this parameter to override the default value. sensor.nodeSelector Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label. sensor.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. image.main.name The name of the main image. image.collector.name The name of the Collector image. image.main.registry The address of the registry you are using for the main image. image.collector.registry The address of the registry you are using for the Collector image. image.scanner.registry The address of the registry you are using for the Scanner image. image.scannerDb.registry The address of the registry you are using for the Scanner DB image. image.scannerV4.registry The address of the registry you are using for the Scanner V4 image. image.scannerV4DB.registry The address of the registry you are using for the Scanner V4 DB image. image.main.pullPolicy Image pull policy for main images. image.collector.pullPolicy Image pull policy for the Collector images. image.main.tag Tag of main image to use. image.collector.tag Tag of collector image to use. collector.collectionMethod Either CORE_BPF or NO_COLLECTION . collector.imagePullPolicy Image pull policy for the Collector container. collector.complianceImagePullPolicy Image pull policy for the Compliance container. collector.disableTaintTolerations If you specify false , tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you specify it as true , no tolerations are applied, and the collector pods are not scheduled onto nodes with taints. collector.resources.requests.memory The memory request for the Collector container. Use this parameter to override the default value. collector.resources.requests.cpu The CPU request for the Collector container. Use this parameter to override the default value. collector.resources.limits.memory The memory limit for the Collector container. Use this parameter to override the default value. collector.resources.limits.cpu The CPU limit for the Collector container. Use this parameter to override the default value. 
collector.complianceResources.requests.memory The memory request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.requests.cpu The CPU request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.memory The memory limit for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.cpu The CPU limit for the Compliance container. Use this parameter to override the default value. collector.serviceTLS.cert The internal service-to-service TLS certificate that Collector uses. collector.serviceTLS.key The internal service-to-service TLS certificate key that Collector uses. admissionControl.listenOnCreates This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for workload creation events. admissionControl.listenOnUpdates When you set this parameter as false , Red Hat Advanced Cluster Security for Kubernetes creates the ValidatingWebhookConfiguration in a way that causes the Kubernetes API server not to send object update events. Since the volume of object updates is usually higher than the object creates, leaving this as false limits the load on the admission control service and decreases the chances of a malfunctioning admission control service. admissionControl.listenOnEvents This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for Kubernetes exec and portforward events. RHACS does not support this feature on OpenShift Container Platform 3.11. admissionControl.dynamic.enforceOnCreates This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. admissionControl.dynamic.enforceOnUpdates This setting controls the behavior of the admission control service. You must specify listenOnUpdates as true for this to work. admissionControl.dynamic.scanInline If you set this option to true , the admission control service requests an image scan before making an admission decision. Since image scans take several seconds, enable this option only if you can ensure that all images used in your cluster are scanned before deployment (for example, by a CI integration during image build). This option corresponds to the Contact image scanners option in the RHACS portal. admissionControl.dynamic.disableBypass Set it to true to disable bypassing the Admission controller. admissionControl.dynamic.timeout Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration . admissionControl.resources.requests.memory The memory request for the Admission Control container. 
Use this parameter to override the default value. admissionControl.resources.requests.cpu The CPU request for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.memory The memory limit for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.cpu The CPU limit for the Admission Control container. Use this parameter to override the default value. admissionControl.nodeSelector Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label. admissionControl.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. admissionControl.serviceTLS.cert The internal service-to-service TLS certificate that Admission Control uses. admissionControl.serviceTLS.key The internal service-to-service TLS certificate key that Admission Control uses. registryOverride Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry. collector.disableTaintTolerations If you specify false , tolerations are applied to Collector, and the Collector pods can schedule onto all nodes with taints. If you specify it as true , no tolerations are applied, and the Collector pods are not scheduled onto nodes with taints. createUpgraderServiceAccount Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions. createSecrets Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller. collector.slimMode Deprecated. Specify true if you want to use a slim Collector image for deploying Collector. sensor.resources Resource specification for Sensor. admissionControl.resources Resource specification for Admission controller. collector.resources Resource specification for Collector. collector.complianceResources Resource specification for Collector's Compliance container. exposeMonitoring If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller. auditLogs.disableCollection If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets. scanner.disable If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true . scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.replicas Resource specification for Collector's Compliance container. scanner.logLevel Setting this parameter allows you to modify the scanner log level. 
Use this option only for troubleshooting purposes. scanner.autoscaling.disable If you set this option to true , Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment. scanner.autoscaling.minReplicas The minimum number of replicas for autoscaling. Defaults to 2. scanner.autoscaling.maxReplicas The maximum number of replicas for autoscaling. Defaults to 5. scanner.nodeSelector Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label. scanner.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. scanner.dbNodeSelector Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label. scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.resources.requests.memory The memory request for the Scanner container. Use this parameter to override the default value. scanner.resources.requests.cpu The CPU request for the Scanner container. Use this parameter to override the default value. scanner.resources.limits.memory The memory limit for the Scanner container. Use this parameter to override the default value. scanner.resources.limits.cpu The CPU limit for the Scanner container. Use this parameter to override the default value. scanner.dbResources.requests.memory The memory request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.requests.cpu The CPU request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.memory The memory limit for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.cpu The CPU limit for the Scanner DB container. Use this parameter to override the default value. monitoring.openshift.enabled If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4. network.enableNetworkPolicies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to False . This is a Boolean value. The default value is True , which means the default policies are automatically created. Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. 5.4.1.2.1.1. Environment variables You can specify environment variables for Sensor and Admission controller in the following format: customize: envVars: ENV_VAR1: "value1" ENV_VAR2: "value2" The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads. The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment). 
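Pulling several of the parameters above together, the following is a hedged example of additional values-public.yaml content for a secured cluster. All concrete numbers and names are placeholders rather than recommendations, and only a small subset of the available parameters is shown:
# values-public.yaml (in addition to clusterName and centralEndpoint)
sensor:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
collector:
  collectionMethod: CORE_BPF
  disableTaintTolerations: false
scanner:
  disable: false             # deploy Scanner-slim and Scanner DB in this cluster
customize:
  envVars:
    ENV_VAR1: "value1"       # example environment variable, as described above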
5.4.1.2.2. Installing the secured-cluster-services Helm chart with customizations After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components: Sensor Admission controller Collector Scanner: optional for secured clusters when the StackRox Scanner is installed Scanner DB: optional for secured clusters when the StackRox Scanner is installed Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed Prerequisites You must have generated an RHACS init bundle for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the address and the port number that you are exposing the Central service on. Procedure Run the following command: USD helm install -n stackrox \ --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <name_of_cluster_init_bundle.yaml> \ -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \ 1 --set imagePullSecrets.username=<username> \ 2 --set imagePullSecrets.password=<password> 3 1 Use the -f option to specify the paths for your YAML configuration files. 2 Include the user name for your pull secret for Red Hat Container Registry authentication. 3 Include the password for your pull secret for Red Hat Container Registry authentication. Note To deploy secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command: USD helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET") 1 1 If you are using base64 encoded variables, use the helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead. Additional resources Generating and applying an init bundle for RHACS on other platforms 5.4.1.3. Changing configuration options after deploying the secured-cluster-services Helm chart You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart. When using the helm upgrade command to make changes, the following guidelines and requirements apply: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes. If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values. If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command. Procedure Update the values-public.yaml and values-private.yaml configuration files with new values. 
Run the helm upgrade command and specify the configuration files using the -f option: USD helm upgrade -n stackrox \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ --reuse-values \ 1 -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml> 1 If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter. 5.4.2. Installing RHACS on secured clusters by using the roxctl CLI To install RHACS on secured clusters by using the CLI, perform the following steps: Install the roxctl CLI Install Sensor. 5.4.2.1. Installing the roxctl CLI You must first download the binary. You can install roxctl on Linux, Windows, or macOS. 5.4.2.1.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 5.4.2.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 5.4.2.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 5.4.2.2. Installing Sensor To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method. To perform an installation by using the manifest installation method, follow only one of the following procedures: Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script. Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance. Prerequisites You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service). 5.4.2.2.1. 
Manifest installation method by using the web portal Procedure On your secured cluster, in the RHACS portal, go to Platform Configuration Clusters . Select Secure a cluster Legacy installation method . Specify a name for the cluster. Provide appropriate values for the fields based on where you are deploying the Sensor. If you are deploying Sensor in the same cluster, accept the default values for all the fields. If you are deploying into a different cluster, replace central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster. If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure ( wss ) protocol. To use wss : Prefix the address with wss:// . Add the port number after the address, for example, wss://stackrox-central.example.com:443 . Click to continue with the Sensor setup. Click Download YAML File and Keys to download the cluster bundle (zip archive). Important The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. 5.4.2.2.2. Manifest installation by using the roxctl CLI Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. Verification Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration Clusters , the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems: On Kubernetes, enter the following command: USD kubectl get pod -n stackrox -w Click Finish to close the window. After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor. 5.5. Verifying installation of RHACS on other platforms Provides steps to verify that RHACS is properly installed. 5.5.1. 
Verifying installation After you complete the installation, run a few vulnerable applications and go to the RHACS portal to evaluate the results of security assessments and policy violations. Note The sample applications listed in the following section contain critical vulnerabilities and they are specifically designed to verify the build and deploy-time assessment features of Red Hat Advanced Cluster Security for Kubernetes. To verify installation: Find the address of the RHACS portal based on your exposure method: For a load balancer: USD kubectl get service central-loadbalancer -n stackrox For port forward: Run the following command: USD kubectl port-forward svc/central 18443:443 -n stackrox Go to https://localhost:18443/ . Create a new namespace: USD kubectl create namespace test Start some applications with critical vulnerabilities: USD kubectl run shell --labels=app=shellshock,team=test-team \ --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test USD kubectl run samba --labels=app=rce \ --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test Red Hat Advanced Cluster Security for Kubernetes automatically scans these deployments for security risks and policy violations as soon as they are submitted to the cluster. Go to the RHACS portal to view the violations. You can log in to the RHACS portal by using the default username admin and the generated password.
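Before looking for violations in the portal, it can help to confirm that the sample deployments actually started. Assuming the test namespace created above, a quick check is:
$ kubectl get pods -n test
Both pods should reach the Running state; their deployments, images, and any policy violations then appear in the RHACS portal shortly afterward.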
[ "helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/", "helm search repo -l rhacs/", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.route.enabled=true", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.loadBalancer.enabled=true", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> 2", "env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain", "env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain", "htpasswd: | admin:<bcrypt-hash>", "central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1", "helm upgrade -n stackrox stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f <path_to_init_bundle_file -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe", "roxctl version", "roxctl central generate interactive", "Enter path to the backup bundle from which to restore keys and certificates (optional): Enter read templates from local filesystem (default: \"false\"): Enter path to helm templates on your local filesystem (default: \"/path\"): Enter PEM cert bundle file (optional): 1 Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: \"true\"): 2 Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): Enter default container images settings (development_build, stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: \"development_build\"): Enter the directory to output the deployment bundle to (default: \"central-bundle\"): Enter the OpenShift major version (3 or 4) to deploy on (default: \"0\"): Enter whether to enable telemetry (default: \"false\"): Enter central-db image to use (if unset, a default will be used according to --image-defaults): Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): Enter the method of exposing Central (route, lb, np, none) (default: \"none\"): 3 Enter main image to use (if unset, a default will be used according to --image-defaults): Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: \"false\"): Enter list of secrets to add as 
declarative configuration mounts in central (default: \"[]\"): 4 Enter list of config maps to add as declarative configuration mounts in central (default: \"[]\"): 5 Enter the deployment tool to use (kubectl, helm, helm-values) (default: \"kubectl\"): Enter scanner-db image to use (if unset, a default will be used according to --image-defaults): Enter scanner image to use (if unset, a default will be used according to --image-defaults): Enter Central volume type (hostpath, pvc): 6 Enter external volume name for Central (default: \"stackrox-db\"): Enter external volume size in Gi for Central (default: \"100\"): Enter storage class name for Central (optional if you have a default StorageClass configured): Enter external volume name for Central DB (default: \"central-db\"): Enter external volume size in Gi for Central DB (default: \"100\"): Enter storage class name for Central DB (optional if you have a default StorageClass configured):", "sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>", "./central-bundle/central/scripts/setup.sh", "oc create -R -f central-bundle/central", "oc get pod -n stackrox -w", "cat central-bundle/password", "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml", "oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2", "kubectl create namespace stackrox 1 kubectl create -f <init_bundle>.yaml \\ 2 -n <stackrox> 3", "helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/", "helm search repo -l rhacs/", "customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3", "helm install ... 
-f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1", "helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe", "roxctl version", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "kubectl get pod -n stackrox -w", "kubectl get service central-loadbalancer -n stackrox", "kubectl port-forward svc/central 18443:443 -n stackrox", "kubectl create namespace test", "kubectl run shell --labels=app=shellshock,team=test-team --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test kubectl run samba --labels=app=rce --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/installing/installing-rhacs-on-other-platforms
5.230. parted
5.230. parted 5.230.1. RHBA-2012:0773 - parted bug fix update Updated parted packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The parted packages allow you to create, destroy, resize, move, and copy hard disk partitions. The parted program can be used for creating space for new operating systems, reorganizing disk usage, and copying data to new hard disks. Bug Fixes BZ# 698121 , BZ# 751164 Prior to this update, when editing partitions on an mpath device, udev could, under certain circumstances, interfere with re-reading the partition table. This update adds the dm_udev_wait option so that udev now synchronizes correctly. BZ# 750395 Prior to this update, the libparted partition_duplicate() function did not correctly copy all GPT partition flags. This update modifies the underlying code so that all flags are correctly copied and adds a test to ensure correct operation. All parted users are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/parted
Chapter 5. Leveraging certification
Chapter 5. Leveraging certification Leveraging allows you to request credit for successful certification tests when similar or substantially similar BMCs are used across a family of server systems. It is based on your internal qualification testing of the specific BMC on each system, confirming that any variations are not material and the solution matches a previously certified one. Leveraging can reduce the amount of official testing needed for certification. You can request leveraging when the solution includes a previously certified BMC with the same firmware branch and equal or fewer features. Note It is your responsibility to verify that any differences in BMC-to-server interaction do not affect the certification.
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/assembly-leveraging-certification_rhosp-bm-pol-cert-testing
Chapter 37. Virtualization
Chapter 37. Virtualization Guests no longer shut down unexpectedly during reboot On a Red Hat Enterprise Linux 7.4 guest running on qemu-kvm-1.5.3-139.el7 , if the i6300esb watchdog was set to poweroff , the watchdog was triggered when shutting down due to the timeout being calculated incorrectly. Consequently, when rebooting the guest, it shut down instead. With this update, the timeout calculations in qemu-kvm have been corrected. As a result, the virtual machine reboots properly. (BZ# 1470244 ) Guests accessed using a serial console no longer become unresponsive Previously, if a client opened the host-side pseudoterminal device (pty) of a KVM guest serial console and did not read from it, the guest in some cases became unresponsive because of blocking read/write calls. With this update, the host-side pty open mode was set to non-blocking. As a result, the guest machine does not become unresponsive in the described scenario. (BZ# 1455451 ) virt-v2v now warns about not converting PCI passthrough devices The virt-v2v utility currently cannot convert PCI passthrough devices and thus ignores them in the conversion process. Prior to this update, however, attempting to convert a guest virtual machine with a PCI passthrough device successfully converted the guest, but did not provide any warning about the ignored PCI passthrough device. Now, converting such a guest logs an appropriate warning message during the conversion. (BZ# 1472719 ) When importing OVAs, virt-v2v now parses MAC addresses Previously, the virt-v2v utility did not parse the MAC addresses of network interfaces when importing Open Virtual Appliances (OVAs). Consequently, the converted guest virtual machines had network interfaces with different MAC addresses, resulting in the network setup breaking. With this release, virt-v2v parses the MAC addresses, if available, of network interfaces when importing OVAs. As a result, the network interfaces of converted guests have the same MAC addresses as specified in the OVAs and the network setup does not break. (BZ# 1506572 )
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_virtualization
2.8. Virtualization
2.8. Virtualization Performance monitoring in KVM guests, BZ# 645365 KVM can now virtualize a performance monitoring unit (vPMU) to allow virtual machines to use performance monitoring. Additionally it supports Intel's " architectural PMU " which can be live-migrated across different host CPU versions, using the -cpu host flag. With this feature, Red Hat virtualization customers are now able to utilize performance monitoring in KVM guests seamlessly. The virtual performance monitoring feature allows virtual machine users to identify sources of performance problems in their guests, using their preferred pre-existing profiling tools that work on the host as well as the guest. This is an addition to the existing ability to profile a KVM guest from the host. This feature is a Technology Preview in Red Hat Enterprise Linux 6.3. Package: kernel-2.6.32-279 Dynamic virtual CPU allocation KVM in Red Hat Enterprise Linux 6.3 now supports dynamic virtual CPU allocation, also called vCPU hot plug, to dynamically manage capacity and react to unexpected load increases on their platforms during off-peak hours. The virtual CPU hot-plugging feature gives system administrators the ability to dynamically adjust CPU resources in a guest. Because a guest no longer has to be taken offline to adjust the CPU resources, the availability of the guest is increased. This feature is a Technology Preview in Red Hat Enterprise Linux 6.3. Currently, only the vCPU hot-add functionality works. The vCPU hot-unplug feature is not yet implemented. Package: qemu-kvm-0.12.1.2-2.295 Virtio-SCSI capabilities KVM Virtualization's storage stack has been improved with the addition of virtio-SCSI (a storage architecture for KVM based on SCSI) capabilities. Virtio-SCSI provides the ability to connect directly to SCSI LUNs and significantly improves scalability compared to virtio-blk. The advantage of virtio-SCSI is that it is capable of handling hundreds of devices compared to virtio-blk which can only handle 25 devices and exhausts PCI slots. Virtio-SCSI is now capable of inheriting the feature set of the target device with the ability to: attach a virtual hard drive or CD through the virtio-scsi controller, pass-through a physical SCSI device from the host to the guest via the QEMU scsi-block device, and allow the usage of hundreds of devices per guest; an improvement from the 32-device limit of virtio-blk. This feature is a Technology Preview in Red Hat Enterprise Linux 6.3 Package: qemu-kvm-0.12.1.2-2.295 Support for in-guest S4/S3 states KVM's power management features have been extended to include native support for S4 (suspend to disk) and S3 (suspend to RAM) states within the virtual machine, speeding up guest restoration from one of these low power states. In earlier implementations guests were saved or restored to/from a disk or memory that was external to the guest, which introduced latency. Additionally, machines can be awakened from S3 with events from a remote keyboard through SPICE. This feature is a Technology Preview and is disabled by default in Red Hat Enterprise Linux 6.3. To enable it, select the /usr/share/seabios/bios-pm.bin file for the VM bios instead of the default /usr/share/seabios/bios.bin file. The native, in-guest S4 (suspend to disk) and S3 (suspend to RAM) power management features support the ability to perform suspend to disk and suspend to RAM functions in the guest (as opposed to the host), reducing the time needed to restore a guest by responding to simple keyboard gestures input. 
This also removes the need to maintain an external memory-state file. This capability is supported on Red Hat Enterprise Linux 6.3 guests and Windows guests running on any hypervisor capable of supporting S3 and S4. Package: seabios-0.6.1.2-19 System monitoring via SNMP, BZ# 642556 This feature provides KVM support for stable technology that is already used in data centers with bare metal systems. SNMP is the standard for monitoring and is extremely well understood as well as computationally efficient. System monitoring via SNMP in Red Hat Enterprise Linux 6 allows the KVM hosts to send SNMP traps on events so that hypervisor events can be communicated to the user via the standard SNMP protocol. This feature is provided through the addition of a new package: libvirt-snmp . This feature is introduced as a Technology Preview. Package: libvirt-snmp-0.0.2-3 Wire speed requirement in KVM network drivers Virtualization and cloud products that run networking workloads need to run at wire speed. Up until Red Hat Enterprise Linux 6.1, the only way to reach wire speed on a 10 GB Ethernet NIC with lower CPU utilization was to use PCI device assignment (passthrough), which limits other features like memory overcommit and guest migration. The macvtap / vhost zero-copy capabilities allow the user to use those features when high performance is required. This feature improves performance for any Red Hat Enterprise Linux 6.x guest in the VEPA use case. This feature is introduced as a Technology Preview. Package: qemu-kvm-0.12.1.2-2.295
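As an illustration of the in-guest S3/S4 Technology Preview enablement described earlier in this section, the alternate BIOS image can be selected when launching qemu-kvm directly; this is a sketch only, and guests managed through libvirt would instead reference the same file in their domain configuration:
$ qemu-kvm -bios /usr/share/seabios/bios-pm.bin <other_guest_options>
Here <other_guest_options> stands for the rest of the normal guest command line, which is unchanged.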
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/virtualization_tp
Chapter 10. Component details
Chapter 10. Component details The following table shows the component versions for each AMQ Streams release.

AMQ Streams | Apache Kafka | Strimzi Operators | Kafka Bridge | Oauth | Cruise Control
2.5.2 | 3.5.0 (+ 3.5.2) | 0.36.0 | 0.26 | 0.13.0 | 2.5.123
2.5.1 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123
2.5.0 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123
2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112
2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103
2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103
2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103
2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89
2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82
2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73
2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73
1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59
1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59
1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37
1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11
1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11
1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11
1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11
1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11
1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | -
1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | -
1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | -
1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | -
1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | -
1.1.1 | 2.1.1 | 0.11.4 | - | - | -
1.1.0 | 2.1.1 | 0.11.1 | - | - | -
1.0 | 2.0.0 | 0.8.1 | - | - | -

Note Strimzi 0.26.0 contains a Log4j vulnerability. The version included in the product has been updated to depend on versions that do not contain the vulnerability.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_rhel/ref-component-details-str
Chapter 3. Configuring the Maven settings.xml file for the online repository
Chapter 3. Configuring the Maven settings.xml file for the online repository You can use the online Maven repository with your Maven project by configuring your user settings.xml file. This is the recommended approach. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Open the Maven ~/.m2/settings.xml file in a text editor or integrated development environment (IDE). Note If there is not a settings.xml file in the ~/.m2/ directory, copy the settings.xml file from the $MAVEN_HOME/.m2/conf/ directory into the ~/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> 3.1. Creating a Spring Boot business application from Maven archetypes You can use Maven archetypes to create business applications that use the Spring Boot framework. Doing this bypasses the need to install and configure Red Hat Decision Manager. You can create a business asset project, a data model project, or a service project: Prerequisites Apache Maven 3.5 or higher Procedure Enter one of the following commands to create your Spring Boot business application project. In these commands, replace business-application with the name of your business application: To create a business asset project that contains business processes, rules, and forms: This command creates a project which generates business-application-kjar-1.0-SNAPSHOT.jar . To create a data model asset project that provides common data structures that are shared between the service projects and business assets projects: This command creates a project which generates business-application-model-1.0-SNAPSHOT.jar . To create a dynamic assets project that provides case management capabilities: This command creates a project which generates business-application-kjar-1.0-SNAPSHOT.jar . To create a service project, a deployable project that provides a service with various capabilities including the business logic that operates your business, enter one of the following commands: Business automation covers features for process management, case management, decision management, and optimization. These features are configured by default in the service project of your business application, but you can turn them off through configuration. To create a business application service project (the default configuration) that includes features for process management, case management, decision management, and optimization: Decision management covers mainly decision- and rules-related features.
To create a decision management service project that includes decision and rules-related features: Business optimization covers features related to planning problems and solutions. To create a Red Hat build of OptaPlanner service project to help you solve planning problems: These commands create a project which generates business-application-service-1.0-SNAPSHOT.jar . In most cases, a service project includes business assets and data model projects. A business application can split services into smaller component service projects for better manageability. 3.2. Configuring a Red Hat Decision Manager Spring Boot project for the online Maven repository After you create your Red Hat Decision Manager Spring Boot project, configure it with the online Maven repository to store your application data. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure In the directory that contains your Red Hat Decision Manager Spring Boot application, open the <BUSINESS-APPLICATION>-service/pom.xml file in a text editor or IDE, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Add the following repository to the repositories element: <repository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </repository> Add the following plug-in repository to the pluginRepositories element: Note If your pom.xml file does not have the pluginRepositories element, add it as well. <pluginRepository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </pluginRepository> Doing this adds the productized Maven repository to your business application. 3.3. Downloading and configuring the Red Hat Process Automation Manager Maven repository If you do not want to use the online Maven repository, you can download and configure the Red Hat Process Automation Manager Maven repository. The Red Hat Process Automation Manager Maven repository contains many of the requirements that Java developers typically use to build their applications. This procedure describes how to edit the Maven settings.xml file to configure the Red Hat Process Automation Manager Maven repository. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Prerequisites You have created a Red Hat Process Automation Manager Spring Boot project. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the following product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13 Maven Repository ( rhpam-7.13.5-maven-repository.zip ). Extract the downloaded archive. Change to the ~/.m2/ directory and open the Maven settings.xml file in a text editor or integrated development environment (IDE).
Add the following lines to the <profiles> element of the Maven settings.xml file, where <MAVEN_REPOSITORY> is the path of the Maven repository that you downloaded. The format of <MAVEN_REPOSITORY> must be file://$PATH , for example file:///home/userX/rhpam-7.13.5.GA-maven-repository/maven-repository . <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the Maven settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project, where <ARTIFACT_NAME> is the name of a missing artifact and <PROJECT_NAME> is the name of the project you are trying to build: Missing artifact <PROJECT_NAME> [ERROR] Failed to execute goal on project <ARTIFACT_NAME> ; Could not resolve dependencies for <PROJECT_NAME> To resolve the issue, delete the cached version of your local repository located in the ~/.m2/repository directory to force a download of the latest Maven artifacts.
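As an optional sanity check, not part of the documented procedure, you can ask Maven to print the profiles and settings it actually resolves and then build one of the generated projects; the business-application-service directory name below assumes the artifact IDs used in the examples above.

# Confirm that the repository profile from ~/.m2/settings.xml is active
mvn help:active-profiles
mvn help:effective-settings

# Build a generated service project against the configured repository
cd business-application-service
mvn clean install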
[ "<!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<activeProfile>red-hat-enterprise-maven-repository</activeProfile>", "mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-kjar -Dversion=1.0-SNAPSHOT -Dpackage=com.company", "mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-model-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-model -Dversion=1.0-SNAPSHOT -Dpackage=com.company.model", "mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DcaseProject=true -DgroupId=com.company -DartifactId=business-application-kjar -Dversion=1.0-SNAPSHOT -Dpackage=com.company", "mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=bpm", "mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=brm", "mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=planner", "<repository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </repository>", "<pluginRepository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </pluginRepository>", "<profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", 
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/proc-online-maven_business-applications
Chapter 8. Connecting to remote Fuse applications
Chapter 8. Connecting to remote Fuse applications The Fuse Console uses Jolokia, an agent-based approach to Java Management Extensions (JMX) that requires extra software (an agent) installed on the client. By default, Red Hat Fuse includes a jolokia agent. With standalone Fuse Console distributions, you can connect to remote integrations that already have a jolokia agent ( https://jolokia.org/ ) running inside them. If the process that you want to connect to does not have a jolokia agent inside, refer to the jolokia documentation ( http://jolokia.org/agent.html ). 8.1. Unlocking the Fuse Console By default, Jolokia for Fuse 7 standalone on Apache Karaf is locked and the Fuse Console is not accessible remotely. To unlock the Fuse Console for a hostname or IP address other than localhost or 127.0.0.1 , follow these steps: Open the $KARAF_HOME/etc/jolokia-access.xml file in an editor. Register the hostnames or IP addresses for the Fuse integrations that you want to access with the Fuse Console by adding them to the <cors> section. For example, to access hostname 0.0.0.3 from the Fuse Console, add the line as shown: Save the file. 8.2. Restricting remote access Optionally, you can restrict remote access to the Fuse Console for specific hosts and IP addresses. You can grant overall access based on the IP address of an HTTP client. To specify these restrictions: In the jolokia-access.xml file, add or edit a <remote> section that contains one or more <host> elements. For the <host> element, you can specify an IP address, a host name, or a netmask given in CIDR format (for example, 10.0.0.0/16 for all clients coming from the 10.0 network). The following example allows access from localhost and all clients whose IP addresses start with 10.0 . For all other IP addresses, access is denied. For more details, see the Jolokia security documentation ( https://jolokia.org/reference/html/security.html ). 8.3. Allowing connections to remote Fuse instances The Fuse Console's proxy servlet uses whitelist host protection, which by default allows the Fuse Console to connect only to localhost. If you want to connect the Fuse Console to other remote Fuse instances, you need to configure the whitelist as follows: For Apache Karaf, make the following configuration changes in the etc/system.properties file: 8.4. Connecting to a remote Jolokia agent Before you begin, you need to know the connection details (host name, port, and path) of the remote Jolokia agent. The default connection URL for the Jolokia agent for Fuse on Apache Karaf is http://<host>:8181/hawtio/jolokia . As a system administrator, you can change this default. Typically, the URL to remotely connect to a Jolokia agent is the URL to open the Fuse Console plus /jolokia . For example, if the URL to open the Fuse Console is http://<host>:1234/hawtio , then the URL to remotely connect to it would probably be http://<host>:1234/hawtio/jolokia . To connect to a remote Jolokia instance so that you can examine its JVM: Click the Connect tab. Click the Remote tab, and then Add connection . Type the Name , Scheme (HTTP or HTTPS), and the hostname . Click Test Connection . Click Add . Note The Fuse Console automatically probes the local network interfaces other than localhost and 127.0.0.1 and adds them to the whitelist. Hence, you do not need to manually register the local machine's addresses to the whitelist. 8.5. 
Setting data moving preferences You can change the following Jolokia preferences, for example, if you want to more frequently refresh data that displays in the Fuse Console. Note that increasing the frequency of data updates impacts networking traffic and increases the number of requests made to the server. Update rate - The period between polls to Jolokia to fetch JMX data (the default is 5 seconds). Maximum depth - The number of levels that Jolokia will marshal an object to JSON on the server side before returning (the default is 7). Maximum collection size - The maximum number of elements in an array that Jolokia marshals in a response (the default is 50,000). To change the values of these settings: In the upper right of the Fuse Console, click the user icon and then click Preferences . Edit the options and then click Close . 8.6. Viewing JVM runtime information To view JVM runtime information, such as system properties, metrics, and threads, click the Runtime tab.
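Before adding a remote connection in the Fuse Console, you can check from the command line that the remote Jolokia endpoint is reachable; the host, port, and credentials below are placeholders for your environment.

# Query the version endpoint of the remote Jolokia agent
curl -u admin:admin "http://remotehost:8181/hawtio/jolokia/version"
# A JSON response that includes the agent and protocol versions indicates the endpoint is usable.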
[ "*<allow-origin>http://0.0.0.3:*</allow-origin>*", "<!-- Cross-Origin Resource Sharing (CORS) restrictions By default, only CORS access within localhost is allowed for maximum security. You can add trusted hostnames in the <cors> section to unlock CORS access from them. --> <cors> <!-- Allow cross origin access only within localhost --> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http://0.0.0.3:*</allow-origin> <!-- Whitelist the hostname patterns as <allow-origin> --> <!-- <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> --> <!-- Check for the proper origin on the server side to protect against CSRF --> <strict-checking /> </cors>", "<remote> <host>localhost</host> <host>10.0.0.0/16</host> </remote>", "hawtio.proxyWhitelist = localhost, 127.0.0.1, myhost1, myhost2, myhost3" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/fuse-console-remote-karaf
5.3. Default Settings
5.3. Default Settings The default settings configure parameters that apply to all proxy subsections in a configuration ( frontend , backend , and listen ). A typical default section may look like the following: Note Any parameter configured in a proxy subsection ( frontend , backend , or listen ) takes precedence over the parameter value in the defaults section. mode specifies the protocol for the HAProxy instance. Using the http mode connects source requests to real servers based on HTTP, ideal for load balancing web servers. For other applications, use the tcp mode. log specifies the log address and syslog facilities to which log entries are written. The global value refers the HAProxy instance to whatever is specified in the log parameter in the global section. option httplog enables logging of various values of an HTTP session, including HTTP requests, session status, connection numbers, source address, and connection timers among other values. option dontlognull disables logging of null connections, meaning that HAProxy will not log connections wherein no data has been transferred. This is not recommended for environments such as web applications over the Internet where null connections could indicate malicious activities such as open port-scanning for vulnerabilities. retries specifies the number of times HAProxy retries a connection to a real server after the first connection attempt fails. The various timeout values specify the length of time of inactivity for a given request, connection, or response. These values are generally expressed in milliseconds (unless explicitly stated otherwise) but may be expressed in any other unit by suffixing the unit to the numeric value. Supported units are us (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). http-request 10s gives 10 seconds to wait for a complete HTTP request from a client. queue 1m sets one minute as the amount of time to wait before a connection is dropped and a client receives a 503 or "Service Unavailable" error. connect 10s specifies the number of seconds to wait for a successful connection to a server. client 1m specifies the amount of time (in minutes) a client can remain inactive (it neither accepts nor sends data). server 1m specifies the amount of time (in minutes) a server is given to accept or send data before timeout occurs.
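After editing the defaults (or any other) section, it is a good idea to validate the configuration before reloading the service; the configuration path and service name below are the usual ones on Red Hat Enterprise Linux 7 but may differ in your deployment.

# Check the configuration file for syntax errors
haproxy -c -f /etc/haproxy/haproxy.cfg

# Reload HAProxy so that the new defaults take effect
systemctl reload haproxy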
[ "defaults mode http log global option httplog option dontlognull retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-haproxy-setup-defaults
6.7.6. Removing Fence Methods and Fence Instances
6.7.6. Removing Fence Methods and Fence Instances To remove a fence method from your cluster configuration, execute the following command: For example, to remove a fence method that you have named APC that you have configured for node01.example.com from the cluster configuration file on cluster node node01.example.com , execute the following command: To remove all fence instances of a fence device from a fence method, execute the following command: For example, to remove all instances of the fence device named apc1 from the method named APC-dual configured for node01.example.com from the cluster configuration file on cluster node node01.example.com , execute the following command:
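To confirm the removal and push the change to the rest of the cluster, you can list the remaining fence configuration for the node and then synchronize the configuration file; the host and node names below reuse the examples above.

# List the fence methods and instances still configured for the node
ccs -h node01.example.com --lsfenceinst node01.example.com

# Propagate and activate the updated cluster configuration on all nodes
ccs -h node01.example.com --sync --activate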
[ "ccs -h host --rmmethod method node", "ccs -h node01.example.com --rmmethod APC node01.example.com", "ccs -h host --rmfenceinst fencedevicename node method", "ccs -h node01.example.com --rmfenceinst apc1 node01.example.com APC-dual" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-fenceinst-remove-ccs-ca
19.2. Basic Options
19.2. Basic Options This section provides information about the basic options. Emulated Machine -M <machine-type> -machine <machine-type> [,<property>[=<value>][,..]] Processor Type -cpu <model>[,<FEATURE>][...] Additional models are visible by running the -cpu ? command. Opteron_G5 - AMD Opteron 63xx class CPU Opteron_G4 - AMD Opteron 62xx class CPU Opteron_G3 - AMD Opteron 23xx (AMD Opteron Gen 3) Opteron_G2 - AMD Opteron 22xx (AMD Opteron Gen 2) Opteron_G1 - AMD Opteron 240 (AMD Opteron Gen 1) Westmere - Westmere E56xx/L56xx/X56xx (Nehalem-C) Haswell - Intel Core Processor (Haswell) SandyBridge - Intel Xeon E312xx (Sandy Bridge) Nehalem - Intel Core i7 9xx (Nehalem Class Core i7) Penryn - Intel Core 2 Duo P9xxx (Penryn Class Core 2) Conroe - Intel Celeron_4x0 (Conroe/Merom Class Core 2) cpu64-rhel5 - Red Hat Enterprise Linux 5 supported QEMU Virtual CPU version cpu64-rhel6 - Red Hat Enterprise Linux 6 supported QEMU Virtual CPU version default - a special option that uses the default model from the list above. Processor Topology -smp <n>[,cores=<ncores>][,threads=<nthreads>][,sockets=<nsocks>][,maxcpus=<maxcpus>] Hypervisor and guest operating system limits on processor topology apply. NUMA System -numa <nodes>[,mem=<size>][,cpus=<cpu[-cpu>]][,nodeid=<node>] Hypervisor and guest operating system limits on processor topology apply. Memory Size -m <megs> Supported values are limited by guest minimal and maximal values and hypervisor limits. Keyboard Layout -k <language> Guest Name -name <name> Guest UUID -uuid <uuid>
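The following command line is a minimal sketch that combines several of the basic options described above; the guest name, machine type, CPU model, and resource values are placeholders and must match what your hypervisor and guest support.

# Start a guest using the basic options: machine type, CPU model, topology, memory, keyboard, name, and UUID
/usr/libexec/qemu-kvm -M rhel6.3.0 -cpu SandyBridge \
    -smp 2,cores=2,threads=1,sockets=1 -m 2048 \
    -k en-us -name rhel6guest -uuid $(uuidgen)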
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sec-qemu_kvm_whitelist_basic_options
14.12. Supported qemu-img Formats
14.12. Supported qemu-img Formats When a format is specified in any of the qemu-img commands, the following format types may be used: raw - Raw disk image format (default). This can be the fastest file-based format. If your file system supports holes (for example in ext2 or ext3 ), then only the written sectors will reserve space. Use qemu-img info to obtain the real size used by the image or ls -ls on Unix/Linux. Although raw images give optimal performance, only very basic features are available with a raw image. For example, no snapshots are available. qcow2 - QEMU image format, the most versatile format with the best feature set. Use it to have optional AES encryption, zlib-based compression, support of multiple VM snapshots, and smaller images, which are useful on file systems that do not support holes . Note that this expansive feature set comes at the cost of performance. Although only the formats above can be used to run on a guest virtual machine or host physical machine, qemu-img also recognizes and supports the following formats in order to convert from them into either raw or qcow2 format. The format of an image is usually detected automatically. In addition to converting these formats into raw or qcow2 , they can be converted back from raw or qcow2 to the original format. Note that the qcow2 version supplied with Red Hat Enterprise Linux 7 is 1.1. The format that is supplied with earlier versions of Red Hat Enterprise Linux is 0.10. You can revert image files to earlier versions of qcow2. To know which version you are using, run the qemu-img info [imagefilename.img] command. To change the qcow version see Section 23.19.2, "Setting Target Elements" . bochs - Bochs disk image format. cloop - Linux Compressed Loop image, useful only to reuse directly compressed CD-ROM images present for example in the Knoppix CD-ROMs. cow - User Mode Linux Copy On Write image format. The cow format is included only for compatibility with previous versions. dmg - Mac disk image format. nbd - Network block device. parallels - Parallels virtualization disk image format. qcow - Old QEMU image format. Only included for compatibility with older versions. qed - Old QEMU image format. Only included for compatibility with older versions. vdi - Oracle VM VirtualBox hard disk image format. vhdx - Microsoft Hyper-V virtual hard disk-X disk image format. vmdk - VMware 3 and 4 compatible image format. vvfat - Virtual VFAT disk image format.
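As an illustration of how these formats are typically used, the following qemu-img commands inspect an image and convert between formats; the file names are placeholders.

# Report the format, virtual size, and (for qcow2) compatibility version of an image
qemu-img info /var/lib/libvirt/images/guest.qcow2

# Convert a VMware vmdk image to qcow2, and a qcow2 image back to raw
qemu-img convert -f vmdk -O qcow2 guest.vmdk guest.qcow2
qemu-img convert -f qcow2 -O raw guest.qcow2 guest.img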
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Using_qemu_img-Supported_qemu_img_formats
2.2. Elements of a Tapset
2.2. Elements of a Tapset The following sections describe the most important aspects of writing a tapset. Most of the content herein is suitable for developers who wish to contribute to SystemTap's upstream library of tapsets. 2.2.1. Tapset Files Tapset files are stored in src/tapset/ of the SystemTap Git directory. Most tapset files are kept at that level. If you have code that only works with a specific architecture or kernel version, you may choose to put your tapset in the appropriate subdirectory. Installed tapsets are located in /usr/share/systemtap/tapset/ or /usr/local/share/systemtap/tapset . Personal tapsets can be stored anywhere. However, to ensure that SystemTap can use them, use -I tapset_directory to specify their location when invoking stap . 2.2.2. Namespace Probe alias names should take the form tapset_name.probe_name . For example, the probe for sending a signal could be named signal.send . Global symbol names (probes, functions, and variables) should be unique across all tapsets. This helps avoid namespace collisions in scripts that use multiple tapsets. To ensure this, use tapset-specific prefixes in your global symbols. Internal symbol names should be prefixed with an underscore ( _ ). 2.2.3. Comments and Documentation All probes and functions should include comment blocks that describe their purpose, the data they provide, and the context in which they run (for example, interrupt or process context). Use comments in areas where your intent may not be clear from reading the code. Note that specially-formatted comments are automatically extracted from most tapsets and included in this guide. This helps ensure that tapset contributors can write their tapset and document it in the same place. The specified format for documenting tapsets is as follows: For example: To override the automatically-generated Synopsis content, use: For example: It is recommended that you use the <programlisting> tag in this instance, since overriding the Synopsis content of an entry does not automatically form the necessary tags. For the purposes of improving the DocBook XML output of your comments, you can also use the following XML tags in your comments: command emphasis programlisting remark (tagged strings will appear in Publican beta builds of the document)
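For example, a personal tapset kept outside the installed tapset directories can be made visible to stap with the -I option; the directory, file, and probe alias names below are hypothetical.

# Keep a personal tapset in an arbitrary directory and point stap at it
mkdir -p ~/my-tapsets
cp mytapset.stp ~/my-tapsets/
stap -I ~/my-tapsets -e 'probe mytapset.send { printf("%s\n", msg) }'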
[ "/** * probe tapset.name - Short summary of what the tapset does. * @argument: Explanation of argument. * @argument2: Explanation of argument2. Probes can have multiple arguments. * * Context: * A brief explanation of the tapset context. * Note that the context should only be 1 paragraph short. * * Text that will appear under \"Description.\" * * A new paragraph that will also appear under the heading \"Description\". * * Header: * A paragraph that will appear under the heading \"Header\". **/", "/** * probe vm.write_shared_copy- Page copy for shared page write. * @address: The address of the shared write. * @zero: Boolean indicating whether it is a zero page * (can do a clear instead of a copy). * * Context: * The process attempting the write. * * Fires when a write to a shared page requires a page copy. This is * always preceded by a vm.shared_write . **/", "* Synopsis: * New Synopsis string *", "/** * probe signal.handle - Fires when the signal handler is invoked * @sig: The signal number that invoked the signal handler * * Synopsis: * <programlisting>static int handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, * sigset_t *oldset, struct pt_regs * regs)</programlisting> */" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/tapsetelements
Chapter 7. Advisories related to this release
Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bugfixes, and CVE fixes included in this release. RHSA-2024:5239 RHSA-2024:5240
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_5_release_notes/errata
Chapter 4. Interactive cluster creation mode reference
Chapter 4. Interactive cluster creation mode reference This section provides an overview of the options that are presented when you use the interactive mode to create the OCM role, the user role, and Red Hat OpenShift Service on AWS (ROSA) clusters by using the ROSA CLI ( rosa ). 4.1. Interactive OCM and user role creation mode options Before you can use Red Hat OpenShift Cluster Manager to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), you must associate your AWS account with your Red Hat organization by creating and linking the OCM and user roles. You can enable interactive mode by specifying the --interactive option when you run the rosa create ocm-role command or the rosa create user-role command. The following tables describe the interactive OCM role creation mode options: Table 4.1. --interactive OCM role creation mode options Field Description Role prefix Specify the prefix to include in the OCM IAM role name. The default is ManagedOpenShift . You can create only one OCM role per AWS account for your Red Hat organization. Enable admin capabilities for the OCM role (optional) Enable the admin OCM IAM role, which is equivalent to specifying the --admin argument. The admin role is required if you want to use auto mode to automatically provision the cluster-specific Operator roles and the OIDC provider by using OpenShift Cluster Manager. Permissions boundary ARN (optional) Specify a permissions boundary Amazon Resource Name (ARN) for the OCM role. For more information, see Permissions boundaries for IAM entities in the AWS documentation. Role Path (optional) Specify a custom ARN path for your OCM role. The path must contain alphanumeric characters only and start and end with / , for example /test/path/dev/ . For more information, see ARN path customization for IAM roles and policies . Role creation mode Select the role creation mode. You can use auto mode to automatically create the OCM role and link it to your Red Hat organization account. In manual mode, the ROSA CLI ( rosa ) generates the aws commands needed to create and link the role. In manual mode, the corresponding policy JSON files are also saved to the current directory. manual mode enables you to review the details before running the aws commands manually. Create the '<ocm_role_name>' role? Confirm if you want to create the OCM role. Link the '<ocm_role_arn>' role with organization '<red_hat_organization_id>'? Confirm if you want to link the OCM role with your Red Hat organization. The following tables describe the interactive user role creation mode options: Table 4.2. --interactive user role creation mode options Field Description Role prefix Specify the prefix to include in the user role name. The default is ManagedOpenShift . Permissions boundary ARN (optional) Specify a permissions boundary Amazon Resource Name (ARN) for the user role. For more information, see Permissions boundaries for IAM entities in the AWS documentation. Role Path (optional) Specify a custom ARN path for your user role. The path must contain alphanumeric characters only and start and end with / , for example /test/path/dev/ . For more information, see ARN path customization for IAM roles and policies . Role creation mode Selects the role creation mode. You can use auto mode to automatically create the user role and link it to your OpenShift Cluster Manager user account. In manual mode, the ROSA CLI generates the aws commands needed to create and link the role. 
In manual mode, the corresponding policy JSON files are also saved to the current directory. manual mode enables you to review the details before running the aws commands manually. Create the '<user_role_name>' role? Confirm if you want to create the user role. Link the '<user_role_arn>' role with account '<red_hat_user_account_id>'? Confirm if you want to link the user role with your Red Hat user account. 4.2. Interactive cluster creation mode options You can create a Red Hat OpenShift Service on AWS cluster with the AWS Security Token Service (STS) by using the interactive mode. You can enable the mode by specifying the --interactive option when you run the rosa create cluster command. The following table describes the interactive cluster creation mode options: Table 4.3. --interactive cluster creation mode options Field Description Cluster name Enter a name for your cluster, for example my-rosa-cluster . Domain prefix Enter a name for the domain prefix for the subdomain of your cluster, for example my-rosa-cluster . Deploy cluster with Hosted Control Plane (optional) Enable the use of Hosted Control Planes. Create cluster admin user Create a local administrator user ( cluster-admin ) for your cluster. This automatically configures a htpasswd identity provider for the cluster-admin user. Create custom password for cluster admin Create a custom password for the cluster-admin user, or use a system-generated password. If you create a custom password, the password must be at least 14 characters (ASCII-standard) and contain no whitespace characters. If you do not create a custom password, the system generates a password and displays it in the command line output. Deploy cluster using AWS STS Create an OpenShift cluster that uses the AWS Security Token Service (STS) to allocate temporary, limited-privilege credentials for component-specific AWS Identity and Access Management (IAM) roles. The service enables cluster components to make AWS API calls using secure cloud resource management practices. The default is Yes . OpenShift version Select the version of OpenShift to install, for example 4. The default is the latest version. Configure the use of IMDSv2 for ec2 instances optional/required (optional) Specify whether all EC2 instances will use both v1 and v2 endpoints of EC2 Instance Metadata Service (IMDS)(optional) or only IMDSv2 (required). Installer role ARN If you have more than one set of account roles in your AWS account for your cluster version, a list of installer role ARNs are provided. Select the ARN for the installer role that you want to use with your cluster. The cluster uses the account-wide roles and policies that relate to the selected installer role. External ID (optional) Specify an unique identifier that is passed by OpenShift Cluster Manager and the OpenShift installer when an account role is assumed. This option is only required for custom account roles that expect an external ID. Operator roles prefix Enter a prefix to assign to the cluster-specific Operator IAM roles. The default is the name of the cluster and a 4-digit random string, for example my-rosa-cluster-a0b1 . Deploy cluster using pre registered OIDC Configuration ID Specify if you want to use a preconfigured OIDC configuration or if you want to create a new OIDC configuration as part of the cluster creation process. Tags (optional) Specify a tag that is used on all resources created by Red Hat OpenShift Service on AWS in AWS. 
Tags can help you manage, identify, organize, search for, and filter resources within AWS. Tags are comma separated, for example: "key value, foo bar". Important Red Hat OpenShift Service on AWS only supports custom tags to Red Hat OpenShift resources during cluster creation. Once added, the tags cannot be removed or edited. Tags that are added by Red Hat are required for clusters to stay in compliance with Red Hat production service level agreements (SLAs). These tags must not be removed. Red Hat OpenShift Service on AWS does not support adding additional tags outside of ROSA cluster-managed resources. These tags can be lost when AWS resources are managed by the ROSA cluster. In these cases, you might need custom solutions or tools to reconcile the tags and keep them intact. Multiple availability zones (optional) Deploy the cluster to multiple availability zones in the AWS region. The default is No , which results in a cluster being deployed to a single availability zone. If you deploy a cluster into multiple availability zones, the AWS region must have at least 3 availability zones. Multiple availability zones are recommended for production workloads. AWS region Specify the AWS region to deploy the cluster in. This overrides the AWS_REGION environment variable. PrivateLink cluster (optional) Create a cluster using AWS PrivateLink. This option provides private connectivity between Virtual Private Clouds (VPCs), AWS services, and your on-premise networks, without exposing your traffic to the public internet. To provide support, Red Hat Site Reliability Engineering (SRE) can connect to the cluster by using AWS PrivateLink Virtual Private Cloud (VPC) endpoints. This option cannot be changed after a cluster is created. The default is No . Machine CIDR Specify the IP address range for machines (cluster nodes), which must encompass all CIDR address ranges for your VPC subnets. Subnets must be contiguous. A minimum IP address range of 128 addresses, using the subnet prefix /25 , is supported for single availability zone deployments. A minimum address range of 256 addresses, using the subnet prefix /24 , is supported for deployments that use multiple availability zones. The default is 10.0.0.0/16 . This range must not conflict with any connected networks. Service CIDR Specify the IP address range for services. It is recommended, but not required, that the address block is the same between clusters. This will not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 172.30.0.0/16 . Pod CIDR Specify the IP address range for pods. It is recommended, but not required, that the address block is the same between clusters. This will not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 10.128.0.0/14 . Install into an existing VPC (optional) Install a cluster into an existing AWS VPC. To use this option, your VPC must have 2 subnets for each availability zone that you are installing the cluster into. The default is No . Select availability zones (optional) Specify the availability zones that are used when installing into an existing AWS VPC. Use a comma-separated list to provide the availability zones. If you specify No , the installer selects the availability zones automatically. 
Enable customer managed key (optional) Enable this option to use a specific AWS Key Management Service (KMS) key as the encryption key for persistent data. This key functions as the encryption key for control plane, infrastructure, and worker node root volumes. The key is also configured on the default storage class to ensure that persistent volumes created with the default storage class will be encrypted with the specific KMS key. When disabled, the account KMS key for the specified region is used by default to ensure persistent data is always encrypted. The default is No . Compute nodes instance type Select a compute node instance type. The default is m5.xlarge . Enable autoscaling (optional) Enable compute node autoscaling. The autoscaler adjusts the size of the cluster to meet your deployment demands. The default is No . Additional Compute Security Group IDs (optional) Select the additional custom security group IDs that are used with the standard machine pool created along side the cluster. The default is none selected. Only security groups associated with the selected VPC are displayed. You can select a maximum of 5 additional security groups. Additional Infra Security Group IDs (optional) Select the additional custom security group IDs that are used with the infra nodes created along side the cluster. The default is none selected. Only security groups associated with the selected VPC are displayed. You can select a maximum of 5 additional security groups. Additional Control Plane Security Group IDs (optional) Select the additional custom security group IDs that are used with the control plane nodes created along side the cluster. The default is none selected. Only security groups associated with the selected VPC are displayed. You can select a maximum of 5 additional security groups. Compute nodes Specify the number of compute nodes to provision into each availability zone. Clusters deployed in a single availability zone require at least 2 nodes. Clusters deployed in multiple zones must have at least 3 nodes. The maximum number of worker nodes is 249 nodes. The default value is 2 . Default machine pool labels (optional) Specify the labels for the default machine pool. The label format should be a comma-separated list of key-value pairs. This list will overwrite any modifications made to node labels on an ongoing basis. Host prefix Specify the subnet prefix length assigned to pods scheduled to individual machines. The host prefix determines the pod IP address pool for each machine. For example, if the host prefix is set to /23 , each machine is assigned a /23 subnet from the pod CIDR address range. The default is /23 , allowing 512 cluster nodes and 512 pods per node, both of which are beyond our supported maximums. For information on the supported maximums, see the Additional resources section below. Machine pool root disk size (GiB or TiB) Specify the size of the machine pool root disk. This value must include a unit suffix like GiB or TiB, for example the default value of 300GiB . Enable FIPS support (optional) Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that Red Hat OpenShift Service on AWS runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a {op-system-base-full} computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on {op-system-base}, see Switching {op-system-base} to FIPS mode . When running {op-system-base-full} or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, Red Hat OpenShift Service on AWS core components use the {op-system-base} cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Encrypt etcd data (optional) In Red Hat OpenShift Service on AWS, the control plane storage is encrypted at rest by default and this includes encryption of the etcd volumes. You can additionally enable the Encrypt etcd data option to encrypt the key values for some resources in etcd, but not the keys. Important By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Red Hat recommends that you enable etcd encryption only if you specifically require it for your use case. Disable workload monitoring (optional) Disable monitoring for user-defined projects. Monitoring for user-defined projects is enabled by default. Route Selector for ingress (optional) Specify the route selector for your ingress. The format should be a comma-separated list of key-value pairs. If you do not specify a label, all routes will be exposed on both routers. For legacy ingress support, these labels are inclusion labels; otherwise, they are treated as exclusion labels. Excluded namespaces for ingress (optional) Specify the excluded namespaces for your ingress. The format should be a comma-separated list value1, value2... . If you do not specify any values, all namespaces will be exposed. Wildcard Policy (optional, choose 'Skip' to skip selection. The default value will be supplied.) Choose the wildcard policy for your ingress. The options are WildcardsDisallowed and WildcardsAllowed . Default is WildcardsDisallowed . Namespace Ownership Policy (optional, choose 'Skip' to skip selection. The default value will be supplied.) Choose the namespace ownership policy for your ingress. The options are Strict and InterNamespaceAllowed . The default is Strict . 4.3. Additional resources For more information about using custom ARN paths for the OCM role, user role, and account-wide roles, see ARN path customization for IAM roles and policies . For a list of the supported maximums, see ROSA tested cluster maximums . For detailed steps to quickly create a ROSA cluster with STS, including the AWS IAM resources, see Creating a ROSA cluster with STS using the default options . For detailed steps to create a ROSA cluster with STS using customizations, including the AWS IAM resources, see Creating a ROSA cluster with STS using customizations . For more information about etcd encryption, see the etcd encryption service definition . For an example VPC architecture, see this sample VPC architecture .
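The interactive prompts described in the tables above are started from the ROSA CLI; the following commands are a minimal sketch of that workflow and assume you have already logged in with rosa login.

# Create and link the OCM role and the user role using interactive mode
rosa create ocm-role --interactive
rosa create user-role --interactive

# Create a ROSA cluster with STS using the interactive prompts
rosa create cluster --sts --interactive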
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/rosa-sts-interactive-mode-reference
Chapter 41. JSLT Action
Chapter 41. JSLT Action Apply a JSLT query or transformation on JSON. 41.1. Configuration Options The following table summarizes the configuration options available for the jslt-action Kamelet: Property Name Description Type Default Example template * Template The inline template for JSLT Transformation string "file://template.json" Note Fields marked with an asterisk (*) are mandatory. 41.2. Dependencies At runtime, the jslt-action Kamelet relies upon the presence of the following dependencies: camel:jslt camel:kamelet 41.3. Usage This section describes how you can use the jslt-action . 41.3.1. Knative Action You can use the jslt-action Kamelet as an intermediate step in a Knative binding. jslt-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 41.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 41.3.1.2. Procedure for using the cluster CLI Save the jslt-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f jslt-action-binding.yaml 41.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. If the template points to a file that is not in the current directory, and if file:// or classpath:// is used, supply the transformation using the secret or the configmap. To view examples, see with secret and with configmap . For details about necessary traits, see Mount trait and JVM classpath trait . 41.3.2. Kafka Action You can use the jslt-action Kamelet as an intermediate step in a Kafka binding. jslt-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 41.3.2.1. Prerequisites Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and create a topic named my-topic in the current namespace. Also, you must have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 41.3.2.2. Procedure for using the cluster CLI Save the jslt-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f jslt-action-binding.yaml 41.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 41.4. 
Kamelet source file https://github.com/openshift-integration/kamelet-catalog/blob/main/jslt-action.kamelet.yaml
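If the JSLT template is supplied from a file rather than inline, one common approach is to store it in a ConfigMap and mount it into the integration; the ConfigMap name and file name below are placeholders, and the exact mount configuration depends on your Camel K setup.

# Store the JSLT template in a ConfigMap so that it can be mounted into the integration
oc create configmap jslt-template --from-file=template.json

# After applying the binding, check that it exists and inspect the integration logs
oc get kameletbinding jslt-action-binding
kamel logs jslt-action-binding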
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {\"foo\" : \"bar\"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: \"file://template.json\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f jslt-action-binding.yaml", "kamel bind timer-source?message=Hello --step jslt-action -p \"step-0.template=file://template.json\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {\"foo\" : \"bar\"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: \"file://template.json\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f jslt-action-binding.yaml", "kamel bind timer-source?message=Hello --step jslt-action -p \"step-0.template=file://template.json\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/jslt-action
Chapter 7. Installation configuration parameters for Alibaba Cloud
Chapter 7. Installation configuration parameters for Alibaba Cloud Before you deploy an OpenShift Container Platform cluster on Alibaba Cloud, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 7.1. Available installation configuration parameters for Alibaba Cloud The following tables specify the required, optional, and Alibaba Cloud-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugin supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 7.1.4. Additional Alibaba Cloud configuration parameters Additional Alibaba Cloud configuration parameters are described in the following table. The alibabacloud parameters are the configuration used when installing on Alibaba Cloud. The defaultMachinePlatform parameters are the default configuration used when installing on Alibaba Cloud for machine pools that do not define their own platform configuration. These parameters apply to both compute machines and control plane machines where specified. Note If defined, the parameters compute.platform.alibabacloud and controlPlane.platform.alibabacloud will overwrite platform.alibabacloud.defaultMachinePlatform settings for compute machines and control plane machines respectively. Table 7.4. Optional Alibaba Cloud parameters Parameter Description Values The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. String. InstanceType defines the ECS instance type. Example: ecs.g6.large String. Defines the category of the system disk. Examples: cloud_efficiency , cloud_essd String. Defines the size of the system disk in gibibytes (GiB). Integer. The list of availability zones that can be used. Examples: cn-hangzhou-h , cn-hangzhou-j String list. The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. String. InstanceType defines the ECS instance type. Example: ecs.g6.xlarge String. Defines the category of the system disk. Examples: cloud_efficiency , cloud_essd String. Defines the size of the system disk in gibibytes (GiB). Integer. The list of availability zones that can be used. Examples: cn-hangzhou-h , cn-hangzhou-j String list. Required. The Alibaba Cloud region where the cluster will be created. String. The ID of an already existing resource group where the cluster will be installed. If empty, the installation program will create a new resource group for the cluster. String. Additional keys and values to apply to all Alibaba Cloud resources created for the cluster. Object. The ID of an already existing VPC where the cluster should be installed. If empty, the installation program will create a new VPC for the cluster. String. The ID list of already existing VSwitches where cluster resources will be created. 
The existing VSwitches can only be used when also using existing VPC. If empty, the installation program will create new VSwitches for the cluster. String list. For both compute machines and control plane machines, the image ID that should be used to create ECS instance. If set, the image ID should belong to the same region as the cluster. String. For both compute machines and control plane machines, the ECS instance type used to create the ECS instance. Example: ecs.g6.xlarge String. For both compute machines and control plane machines, the category of the system disk. Examples: cloud_efficiency , cloud_essd . String, for example "", cloud_efficiency , cloud_essd . For both compute machines and control plane machines, the size of the system disk in gibibytes (GiB). The minimum is 120 . Integer. For both compute machines and control plane machines, the list of availability zones that can be used. Examples: cn-hangzhou-h , cn-hangzhou-j String list. The ID of an existing private zone into which to add DNS records for the cluster's internal API. An existing private zone can only be used when also using existing VPC. The private zone must be associated with the VPC containing the subnets. Leave the private zone unset to have the installation program create the private zone on your behalf. String.
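The parameters in the tables above come together in the install-config.yaml file. The following minimal sketch is illustrative only: the base domain, cluster name, region, instance type, and the truncated pull secret and SSH key are placeholder assumptions, not values from a tested deployment.

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  replicas: 3
  platform:
    alibabacloud:
      # Instance type is a placeholder; choose one sized for your workload.
      instanceType: ecs.g6.large
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  alibabacloud:
    # Region is illustrative; it must match the zones you intend to use.
    region: cn-hangzhou
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...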
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "compute: platform: alibabacloud: imageID:", "compute: platform: alibabacloud: instanceType:", "compute: platform: alibabacloud: systemDiskCategory:", "compute: platform: alibabacloud: systemDisksize:", "compute: platform: alibabacloud: zones:", "controlPlane: platform: alibabacloud: imageID:", "controlPlane: platform: alibabacloud: instanceType:", "controlPlane: platform: alibabacloud: systemDiskCategory:", "controlPlane: platform: alibabacloud: systemDisksize:", "controlPlane: platform: alibabacloud: zones:", "platform: alibabacloud: region:", "platform: alibabacloud: resourceGroupID:", "platform: alibabacloud: tags:", "platform: alibabacloud: vpcID:", "platform: alibabacloud: vswitchIDs:", "platform: alibabacloud: defaultMachinePlatform: imageID:", "platform: alibabacloud: defaultMachinePlatform: instanceType:", "platform: alibabacloud: defaultMachinePlatform: systemDiskCategory:", "platform: alibabacloud: defaultMachinePlatform: systemDiskSize:", "platform: alibabacloud: defaultMachinePlatform: zones:", "platform: alibabacloud: privateZoneID:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_alibaba/installation-config-parameters-alibaba
Chapter 4. Upgrading your system from RHEL 6 to RHEL 7
Chapter 4. Upgrading your system from RHEL 6 to RHEL 7 After you have corrected all problems reported by the Preupgrade Assistant, use the Red Hat Upgrade Tool to upgrade your system from RHEL 6.10 to RHEL 7.9. Always perform any necessary post-install tasks to ensure your system is up-to-date and to prevent upgrade-related problems. Important Test the upgrade process on a safe, non-production system before you perform it on any production system. Prerequisites You have completed the preparation steps described in Preparing a RHEL 6 system for the upgrade , including a full system backup. You have performed the pre-upgrade system assessment and resolved all reported problems. For details, see Assessing system upgrade suitability . Procedure Prepare source repositories or media with RHEL 7 packages in one of the following locations: An installation repository created from a DVD ISO where you download RHEL 7 packages, for example, an FTP server or an HTTPS site that contains the RHEL 7.9 packages. For more information, see Preparing installation sources . Mounted installation media An ISO image In any of the above options, you can configure custom repositories and additional repositories provided by Red Hat. For example, certain packages available in the RHEL 6 Base system are provided in the RHEL 7 Extras repository and are not on a RHEL 7 DVD. If you know that your system requires packages that are not in the RHEL 7 Base repository, you can install a separate RHEL 7 system to act as a yum repository that provides the required packages over FTP or HTTP. To set up an additional repository that you can use during the upgrade, follow instructions in How to create a local repository for updates . Then use the --addrepo=REPOID=URL option with the redhat-upgrade-tool command. Important It is strongly recommended to use RHEL 7.9 GA source repositories to prevent booting issues after the upgrade. For more information, see Known Issues . Disable active repositories to prevent problems with combining packages from different major releases of RHEL. Install the yum-utils package: Disable active repositories: For more information, see Can I install packages from different versions of RHEL . Run the Red Hat Upgrade Tool to download RHEL 7 packages and prepare the package installation. Specify the location of the Red Hat Enterprise Linux 7 packages: Installation repository Mounted installation media If you do not specify the device path, the Red Hat Upgrade Tool scans all mounted removable devices. ISO image Important You can use the following options with the redhat-upgrade-tool command for all three locations: --cleanup-post: Automatically removes Red Hat-signed packages that do not have a RHEL 7 replacement. Recommended. If you do not use the --cleanup-post option, you must remove all remaining RHEL 6 packages after the in-place upgrade to ensure that your system is fully supported. --snapshot-root-lv and --snapshot-lv: Create snapshots of system volumes. Snapshots are required to perform a rollback of the RHEL system in case of upgrade failure. For more information, see Rollbacks and cleanup after upgrading RHEL 6 to RHEL 7 . Reboot the system when prompted. Depending on the number of packages being upgraded, this process can take up to several hours to complete. Manually perform any post-upgrade tasks described in the pre-upgrade assessment result. If your system architecture is 64-bit Intel, upgrade from GRUB Legacy to GRUB 2. See the System Administrators Guide for more information. 
If Samba is installed on the upgraded host, manually run the testparm utility to verify the /etc/samba/smb.conf file. If the utility reports any configuration errors, you must fix them before you can start Samba. Optional: If you did not use the --cleanup-post option when running the Red Hat Upgrade Tool, clean up orphaned RHEL 6 packages: Warning Be careful not to accidentally remove custom packages that are compatible with RHEL 7. Warning Using the rpm command to remove orphaned packages might cause broken dependencies in some RHEL 7 packages. Refer to Fixing dependency errors for information about how to fix those dependency errors. Update your new RHEL 7 packages to their latest version. Verification Verify that the system was upgraded to the latest version of RHEL 7. Verify that the system is automatically resubscribed for RHEL 7. If the repository list does not contain RHEL repositories, run the following commands to unsubscribe the system, resubscribe the system as a RHEL 7 system, and add required repositories: If any problems occur during or after the in-place upgrade, see Troubleshooting for assistance.
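The verification commands above can be combined into a short post-upgrade check. This is only a sketch: it reuses the commands shown in this chapter and the /tmp/el6.txt path from the cleanup step, and it reports findings rather than removing anything.

# List any packages still tagged .el6 after the upgrade; review the list
# manually before removing anything, to avoid deleting compatible custom packages.
rpm -qa | grep '\.el6' | tee /tmp/el6.txt
wc -l /tmp/el6.txt

# Confirm the release and the subscribed RHEL 7 repositories.
cat /etc/redhat-release
yum repolist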
[ "yum install yum-utils", "yum-config-manager --disable \\*", "redhat-upgrade-tool --network 7.9 --instrepo ftp-or-http-url --cleanup-post", "redhat-upgrade-tool --device device_path --cleanup-post", "redhat-upgrade-tool --iso iso_path --cleanup-post", "reboot", "rpm -qa | grep .el6 &> /tmp/el6.txt rpm -e USD(cat /tmp/el6.txt) --nodeps", "yum update reboot", "cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.9 (Maipo)", "yum repolist Loaded plugins: product-id, subscription-manager repo id repo name status rhel-7-server-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server (RPMs) 23,676", "subscription-manager remove --all subscription-manager unregister subscription-manager register subscription-manager attach --pool= poolID subscription-manager repos --enable= repoID" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/upgrading_from_rhel_6_to_rhel_7/upgrading-your-system-from-rhel-6-to-rhel-7_upgrading-from-rhel-6-to-rhel-7
Chapter 5. Upgrading a geo-replication deployment of Red Hat Quay on OpenShift Container Platform
Chapter 5. Upgrading a geo-replication deployment of Red Hat Quay on OpenShift Container Platform Use the following procedure to upgrade your geo-replicated Red Hat Quay on OpenShift Container Platform deployment. Important When upgrading a geo-replicated Red Hat Quay on OpenShift Container Platform deployment to the next y-stream release (for example, Red Hat Quay 3.7 → Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime when upgrading from one y-stream release to the next. It is highly recommended to back up your Red Hat Quay on OpenShift Container Platform deployment before upgrading. Procedure This procedure assumes that you are running the Red Hat Quay registry on three or more systems. For this procedure, we will assume three systems named System A, System B, and System C . System A will serve as the primary system in which the Red Hat Quay Operator is deployed. On System B and System C, scale down your Red Hat Quay registry. This is done by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair if it is managed. Use the following quayregistry.yaml file as a reference: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ... 1 Disable auto scaling of Quay , Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage Note You must keep the Red Hat Quay registry running on System A. Do not update the quayregistry.yaml file on System A. Wait for the registry-quay-app , registry-quay-mirror , and registry-clair-app pods to disappear. Enter the following command to check their status: oc get pods -n <quay-namespace> Example output quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m On System A, initiate a Red Hat Quay upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators . For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator . After the new Red Hat Quay registry is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started. Confirm that the update has properly worked by navigating to the Red Hat Quay UI: In the OpenShift console, navigate to Operators → Installed Operators , and click the Registry Endpoint link. Important Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay registry on System B and on System C until the UI is available on System A. After you confirm that the update has properly worked on System A, initiate the Red Hat Quay upgrade on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted. Note Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start. 
After updating, revert the changes made in step 1 of this procedure by removing overrides for the components. For example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay managed: true - kind: clair managed: true - kind: mirror managed: true ... 1 If the horizontalpodautoscaler resource was set to true before the upgrade procedure, or if you want Red Hat Quay to scale in case of a resource shortage, set it to true .
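The scale-down in step 1 and the later revert can both be applied with the same hedged command sequence; the file name and namespace below are placeholders for your own values.

# Apply the quayregistry.yaml override that disables autoscaling and sets
# the Quay, Clair, and mirror replica counts to zero (not on System A).
oc apply -f quayregistry.yaml -n <quay-namespace>

# Watch the namespace until the registry-quay-app, registry-quay-mirror,
# and registry-clair-app pods have disappeared before upgrading the Operator.
oc get pods -n <quay-namespace> -w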
[ "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...", "get pods -n <quay-namespace>", "quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay managed: true - kind: clair managed: true - kind: mirror managed: true ..." ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/upgrade_red_hat_quay/upgrading-geo-repl-quay-operator
Chapter 41. identity
Chapter 41. identity This chapter describes the commands under the identity command. 41.1. identity provider create Create new identity provider Usage: Table 41.1. Positional Arguments Value Summary <name> New identity provider name (must be unique) Table 41.2. Optional Arguments Value Summary -h, --help Show this help message and exit --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --description <description> New identity provider description --domain <domain> Domain to associate with the identity provider. if not specified, a domain will be created automatically. (Name or ID) --enable Enable identity provider (default) --disable Disable the identity provider Table 41.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 41.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 41.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 41.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 41.2. identity provider delete Delete identity provider(s) Usage: Table 41.7. Positional Arguments Value Summary <identity-provider> Identity provider(s) to delete Table 41.8. Optional Arguments Value Summary -h, --help Show this help message and exit 41.3. identity provider list List identity providers Usage: Table 41.9. Optional Arguments Value Summary -h, --help Show this help message and exit Table 41.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 41.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 41.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 41.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 41.4. identity provider set Set identity provider properties Usage: Table 41.14. Positional Arguments Value Summary <identity-provider> Identity provider to modify Table 41.15. 
Optional Arguments Value Summary -h, --help Show this help message and exit --description <description> Set identity provider description --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --enable Enable the identity provider --disable Disable the identity provider 41.5. identity provider show Display identity provider details Usage: Table 41.16. Positional Arguments Value Summary <identity-provider> Identity provider to display Table 41.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 41.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 41.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 41.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 41.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
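Putting the subcommands above together, a typical create-and-verify sequence might look like the following; the provider name, domain, and remote IDs are illustrative placeholders.

# Create an identity provider in a specific domain with two remote IDs.
openstack identity provider create \
    --description "Corporate SAML IdP" \
    --domain example_domain \
    --remote-id https://idp.example.com/entity1 \
    --remote-id https://idp.example.com/entity2 \
    corp-idp

# Display the new identity provider in YAML format.
openstack identity provider show corp-idp -f yaml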
[ "openstack identity provider create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--remote-id <remote-id> | --remote-id-file <file-name>] [--description <description>] [--domain <domain>] [--enable | --disable] <name>", "openstack identity provider delete [-h] <identity-provider> [<identity-provider> ...]", "openstack identity provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack identity provider set [-h] [--description <description>] [--remote-id <remote-id> | --remote-id-file <file-name>] [--enable | --disable] <identity-provider>", "openstack identity provider show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <identity-provider>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/identity
14.16. Guest Virtual Machine CPU Model Configuration
14.16. Guest Virtual Machine CPU Model Configuration This section provides information about guest virtual machine CPU model configuration. 14.16.1. Introduction Every hypervisor has its own policy for what a guest virtual machine will see for its CPUs by default. Whereas some hypervisors decide which CPU host physical machine features will be available for the guest virtual machine, QEMU/KVM presents the guest virtual machine with a generic model named qemu32 or qemu64 . These hypervisors perform more advanced filtering, classifying all physical CPUs into a handful of groups and have one baseline CPU model for each group that is presented to the guest virtual machine. Such behavior enables the safe migration of guest virtual machines between host physical machines, provided they all have physical CPUs that classify into the same group. libvirt does not typically enforce policy itself, rather it provides the mechanism on which the higher layers define their own desired policy. Understanding how to obtain CPU model information and define a suitable guest virtual machine CPU model is critical to ensure guest virtual machine migration is successful between host physical machines. Note that a hypervisor can only emulate features that it is aware of and features that were created after the hypervisor was released may not be emulated.
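As a practical starting point for the comparison described above, the CPU model and feature flags that libvirt detects on the host physical machine can be read from the capabilities XML; the grep filter below is only a convenience for trimming the output.

# Show the host CPU model, vendor, and feature flags known to libvirt.
virsh capabilities | grep -E '<model>|<vendor>|<feature'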
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-guest_virtual_machine_cpu_model_configuration
Chapter 13. IPPool [whereabouts.cni.cncf.io/v1alpha1]
Chapter 13. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for the ippools API Type object 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IPPoolSpec defines the desired state of IPPool 13.1.1. .spec Description IPPoolSpec defines the desired state of IPPool Type object Required allocations range Property Type Description allocations object Allocations is the set of allocated IPs for the given range. Its` indices are a direct mapping to the IP with the same index/offset for the pool's range. allocations{} object IPAllocation represents metadata about the pod/container owner of a specific IP range string Range is a RFC 4632/4291-style string that represents an IP address and prefix length in CIDR notation 13.1.2. .spec.allocations Description Allocations is the set of allocated IPs for the given range. Its` indices are a direct mapping to the IP with the same index/offset for the pool's range. Type object 13.1.3. .spec.allocations{} Description IPAllocation represents metadata about the pod/container owner of a specific IP Type object Required id podref Property Type Description id string ifname string podref string 13.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/ippools GET : list objects of kind IPPool /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools DELETE : delete collection of IPPool GET : list objects of kind IPPool POST : create an IPPool /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools/{name} DELETE : delete an IPPool GET : read the specified IPPool PATCH : partially update the specified IPPool PUT : replace the specified IPPool 13.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/ippools HTTP method GET Description list objects of kind IPPool Table 13.1. HTTP responses HTTP code Reponse body 200 - OK IPPoolList schema 401 - Unauthorized Empty 13.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools HTTP method DELETE Description delete collection of IPPool Table 13.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IPPool Table 13.3. HTTP responses HTTP code Reponse body 200 - OK IPPoolList schema 401 - Unauthorized Empty HTTP method POST Description create an IPPool Table 13.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.5. Body parameters Parameter Type Description body IPPool schema Table 13.6. HTTP responses HTTP code Reponse body 200 - OK IPPool schema 201 - Created IPPool schema 202 - Accepted IPPool schema 401 - Unauthorized Empty 13.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools/{name} Table 13.7. Global path parameters Parameter Type Description name string name of the IPPool HTTP method DELETE Description delete an IPPool Table 13.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IPPool Table 13.10. HTTP responses HTTP code Reponse body 200 - OK IPPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IPPool Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.12. 
HTTP responses HTTP code Reponse body 200 - OK IPPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IPPool Table 13.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.14. Body parameters Parameter Type Description body IPPool schema Table 13.15. HTTP responses HTTP code Reponse body 200 - OK IPPool schema 201 - Created IPPool schema 401 - Unauthorized Empty
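Drawing together the required spec fields described above, a minimal IPPool object might look like the following sketch; the name, namespace, range, offset key, and pod reference are illustrative assumptions rather than output from a live cluster.

apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: IPPool
metadata:
  name: example-ippool
  namespace: openshift-multus
spec:
  # RFC 4632/4291-style CIDR for the addresses managed by this pool.
  range: 192.168.2.0/24
  # Keys are offsets into the range; each entry records the owning pod.
  allocations:
    "1":
      id: "0123456789abcdef"
      podref: "default/example-pod"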
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/ippool-whereabouts-cni-cncf-io-v1alpha1
Chapter 9. Object Storage
Chapter 9. Object Storage The Object Storage (swift) service stores and retrieves data over HTTP. Objects (blobs of data) are stored in an organizational hierarchy that can be configured to offer anonymous read-only access, ACL defined access, or even temporary access. Swift supports multiple token-based authentication mechanisms implemented through middleware. Applications store and retrieve data in Object Storage using an industry-standard HTTP RESTful API. The back end swift components follow the same RESTful model, although some APIs (such as those managing durability) are kept private to the cluster. The components of swift fall into the following primary groups: Proxy services Auth services Storage services Account service Container service Object service Note An Object Storage installation does not have to be internet-facing and could also be a private cloud with the public switch a part of the organization's internal network infrastructure. 9.1. Network security Security hardening for swift begins with securing the networking component. See the networking chapter for more information. For high availability, the rsync protocol is used to replicate data between storage service nodes. In addition, the proxy service communicates with the storage service when relaying data between the client end-point and the cloud environment. Note Swift does not use encryption or authentication with inter-node communications. This is because swift uses the native rsync protocol for performance reasons, and does not use SSH for rsync communications. This is why you see a private switch or private network ([V]LAN) in the architecture diagrams. This data zone should be separate from other OpenStack data networks as well. Note Use a private (V)LAN network segment for your storage nodes in the data zone. This requires that the proxy nodes have dual interfaces (physical or virtual): One interface as a public interface for consumers to reach. Another interface as a private interface with access to the storage nodes. The following figure demonstrates one possible network architecture, using the Object Storage network architecture with a management node (OSAM): 9.2. General service security 9.2.1. Run services as non-root user It is recommended that you configure swift to run under a non-root ( UID 0 ) service account. One recommendation is the username swift with the primary group swift , as deployed by director. Object Storage services include, for example, proxy-server , container-server , account-server . 9.2.2. File permissions The /var/lib/config-data/puppet-generated/swift/etc/swift/ directory contains information about the ring topology and environment configuration. The following permissions are recommended: This restriction only allows root to modify configuration files, while still allowing the services to read them, due to their membership in the swift group. 9.3. Securing storage services The following are the default listening ports for the various storage services: Account service - TCP/6002 Container service - TCP/6001 Object Service - TCP/6000 Rsync - TCP/873 Note If ssync is used instead of rsync, the object service port is used for maintaining durability. Note Authentication does not occur at the storage nodes. If you are able to connect to a storage node on one of these ports, you can access or modify data without authentication. To help mitigate this issue, you should follow the recommendations given previously about using a private storage network. 9.3.1. 
Object Storage account terminology A swift account is not a user account or credential. The following distinctions exist: Swift account - A collection of containers (not user accounts or authentication). The authentication system you use will determine which users are associated with the account and how they might access it. Swift containers - A collection of objects. Metadata on the container is available for ACLs. The usage of ACLs is dependent on the authentication system used. Swift objects - The actual data objects. ACLs at the object level are also available with metadata, and are dependent on the authentication system used. At each level, you have ACLs that control user access; ACLs are interpreted based on the authentication system in use. The most common type of authentication provider is the Identity Service (keystone); custom authentication providers are also available. 9.4. Securing proxy services A proxy node should have at least two interfaces (physical or virtual): one public and one private. You can use firewalls or service binding to help protect the public interface. The public-facing service is an HTTP web server that processes end-point client requests, authenticates them, and performs the appropriate action. The private interface does not require any listening services, but is instead used to establish outgoing connections to storage nodes on the private storage network. 9.4.1. HTTP listening port Director configures the web services to run under a non-root (no UID 0) user. Using port numbers higher than 1024 help avoid running any part of the web container as root. Normally, clients that use the HTTP REST API (and perform automatic authentication) will retrieve the full REST API URL they require from the authentication response. The OpenStack REST API allows a client to authenticate to one URL and then be redirected to use a completely different URL for the actual service. For example, a client can authenticate to https://identity.cloud.example.org:55443/v1/auth and get a response with their authentication key and storage URL (the URL of the proxy nodes or load balancer) of https://swift.cloud.example.org:44443/v1/AUTH_8980 . 9.4.2. Load balancer If the option of using Apache is not feasible, or for performance you wish to offload your TLS work, you might employ a dedicated network device load balancer. This is a common way to provide redundancy and load balancing when using multiple proxy nodes. If you choose to offload your TLS, ensure that the network link between the load balancer and your proxy nodes are on a private (V)LAN segment such that other nodes on the network (possibly compromised) cannot wiretap (sniff) the unencrypted traffic. If such a breach was to occur, the attacker could gain access to endpoint client or cloud administrator credentials and access the cloud data. The authentication service you use will determine how you configure a different URL in the responses to endpoint clients, allowing them to use your load balancer instead of an individual proxy node. 9.5. Object Storage authentication Object Storage (swift) uses a WSGI model to provide for a middleware capability that not only provides general extensibility, but is also used for authentication of endpoint clients. The authentication provider defines what roles and user types exist. Some use traditional username and password credentials, while others might leverage API key tokens or even client-side x.509 certificates. Custom providers can be integrated using custom middleware. 
Object Storage comes with two authentication middleware modules by default, either of which can be used as sample code for developing a custom authentication middleware. 9.5.1. Keystone Keystone is the commonly used Identity provider in OpenStack. It may also be used for authentication in Object Storage. 9.6. Encrypt at-rest swift objects Swift can integrate with Barbican to transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption, and refers to the objects being encrypted while being stored on disk. Swift performs these encryption tasks transparently, with the objects being automatically encrypted when uploaded to swift, then automatically decrypted when served to a user. This encryption and decryption is done using the same (symmetric) key, which is stored in Barbican. For more information, see the Barbican integration guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/manage_secrets_with_openstack_key_manager/ 9.7. Additional items In /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf on every node, there is a swift_hash_path_prefix setting and a swift_hash_path_suffix setting. These are provided to reduce the chance of hash collisions for objects being stored and avert one user overwriting the data of another user. This value should be initially set with a cryptographically secure random number generator and consistent across all nodes. Ensure that it is protected with proper ACLs and that you have a backup copy to avoid data loss.
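One hedged way to produce candidate values for swift_hash_path_prefix and swift_hash_path_suffix is shown below; distributing the values consistently to every node and protecting swift.conf with the ACLs described above remains a separate step.

# Generate two independent random values for the hash path prefix and suffix;
# store them securely, because losing them makes existing object data unreachable.
openssl rand -hex 32
openssl rand -hex 32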
[ "chown -R root:swift /var/lib/config-data/puppet-generated/swift/etc/swift/* find /var/lib/config-data/puppet-generated/swift/etc/swift/ -type f -exec chmod 640 {} \\; find /var/lib/config-data/puppet-generated/swift/etc/swift/ -type d -exec chmod 750 {} \\;" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/object_storage
Chapter 4. Introduction to devfile in Dev Spaces
Chapter 4. Introduction to devfile in Dev Spaces Devfiles are YAML text files used for development environment customization. Use them to configure a devfile to suit your specific needs and share the customized devfile across multiple workspaces to ensure identical user experience and build, run, and deploy behaviours across your team. Red Hat OpenShift Dev Spaces-specific devfile features Red Hat OpenShift Dev Spaces is expected to work with most of the popular images defined in the components section of a devfile. For production purposes, it is recommended to use one of the Universal Base Images as a base image for defining the Cloud Development Environment. Warning Some images cannot be used as-is for defining a Cloud Development Environment because Visual Studio Code - Open Source ("Code - OSS") cannot be started in containers that are missing openssl and libbrotli . Missing libraries should be explicitly installed at the Dockerfile level, for example: RUN yum install compat-openssl11 libbrotli Devfile and Universal Developer Image You do not need a devfile to start a workspace. If you do not include a devfile in your project repository, Red Hat OpenShift Dev Spaces automatically loads a default devfile with a Universal Developer Image (UDI). Devfile Registry The devfile registry contains ready-to-use community-supported devfiles for different languages and technologies. Devfiles included in the registry should be treated as samples rather than templates. Additional resources What is a devfile Benefits of devfile Devfile customization overview Devfile.io Customizing Cloud Development Environments
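As a rough illustration of the customization described in this chapter, a minimal devfile might look like the following sketch; the schema version, component name, image reference, and build command are assumptions to adapt to your project rather than a supported configuration.

schemaVersion: 2.2.0
metadata:
  name: example-project
components:
  - name: tools
    container:
      # A UBI-based developer image is recommended for production use;
      # this tag is a placeholder, not a pinned, supported release.
      image: registry.redhat.io/devspaces/udi-rhel8:latest
      memoryLimit: 2Gi
commands:
  - id: build
    exec:
      component: tools
      commandLine: mvn -q package
      workingDir: ${PROJECT_SOURCE}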
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/user_guide/devfile-introduction
Chapter 9. Log collection and forwarding
Chapter 9. Log collection and forwarding 9.1. About log collection and forwarding The Red Hat OpenShift Logging Operator deploys a collector based on the ClusterLogForwarder resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. 9.1.1. Log collection The log collector is a daemon set that deploys pods to each OpenShift Container Platform node to collect container and node logs. By default, the log collector uses the following sources: System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform. /var/log/containers/*.log for all container logs. If you configure the log collector to collect audit logs, it collects them from /var/log/audit/audit.log . The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration. 9.1.1.1. Log collector types Vector is a log collector offered as an alternative to Fluentd for the logging. You can configure which logging collector type your cluster uses by modifying the ClusterLogging custom resource (CR) collection spec: Example ClusterLogging CR that configures Vector as the collector apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {} # ... 9.1.1.2. Log collection limitations The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort . Important The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source. 9.1.1.3. Log collector features by type Table 9.1. Log Sources Feature Fluentd Vector App container logs [✓] [✓] App-specific routing [✓] [✓] App-specific routing by namespace [✓] [✓] Infra container logs [✓] [✓] Infra journal logs [✓] [✓] Kube API audit logs [✓] [✓] OpenShift API audit logs [✓] [✓] Open Virtual Network (OVN) audit logs [✓] [✓] Table 9.2. Authorization and Authentication Feature Fluentd Vector Elasticsearch certificates [✓] [✓] Elasticsearch username / password [✓] [✓] Cloudwatch keys [✓] [✓] Cloudwatch STS [✓] [✓] Kafka certificates [✓] [✓] Kafka username / password [✓] [✓] Kafka SASL [✓] [✓] Loki bearer token [✓] [✓] Table 9.3. 
Normalizations and Transformations Feature Fluentd Vector Viaq data model - app [✓] [✓] Viaq data model - infra [✓] [✓] Viaq data model - infra(journal) [✓] [✓] Viaq data model - Linux audit [✓] [✓] Viaq data model - kube-apiserver audit [✓] [✓] Viaq data model - OpenShift API audit [✓] [✓] Viaq data model - OVN [✓] [✓] Loglevel Normalization [✓] [✓] JSON parsing [✓] [✓] Structured Index [✓] [✓] Multiline error detection [✓] [✓] Multicontainer / split indices [✓] [✓] Flatten labels [✓] [✓] CLF static labels [✓] [✓] Table 9.4. Tuning Feature Fluentd Vector Fluentd readlinelimit [✓] Fluentd buffer [✓] - chunklimitsize [✓] - totallimitsize [✓] - overflowaction [✓] - flushthreadcount [✓] - flushmode [✓] - flushinterval [✓] - retrywait [✓] - retrytype [✓] - retrymaxinterval [✓] - retrytimeout [✓] Table 9.5. Visibility Feature Fluentd Vector Metrics [✓] [✓] Dashboard [✓] [✓] Alerts [✓] [✓] Table 9.6. Miscellaneous Feature Fluentd Vector Global proxy support [✓] [✓] x86 support [✓] [✓] ARM support [✓] [✓] PowerPC support [✓] [✓] IBM Z support [✓] [✓] IPv6 support [✓] [✓] Log event buffering [✓] Disconnected Cluster [✓] [✓] 9.1.1.4. Collector outputs The following collector outputs are supported: Table 9.7. Supported outputs Feature Fluentd Vector Elasticsearch v6-v8 [✓] [✓] Fluent forward [✓] Syslog RFC3164 [✓] [✓] (Logging 5.7+) Syslog RFC5424 [✓] [✓] (Logging 5.7+) Kafka [✓] [✓] Cloudwatch [✓] [✓] Cloudwatch STS [✓] [✓] Loki [✓] [✓] HTTP [✓] [✓] (Logging 5.7+) Google Cloud Logging [✓] [✓] Splunk [✓] (Logging 5.6+) 9.1.2. Log forwarding Administrators can create ClusterLogForwarder resources that specify which logs are collected, how they are transformed, and where they are forwarded to. ClusterLogForwarder resources can be used up to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported so that log forwarders can be configured to send logs securely. Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs. Additional resources Using RBAC to define and apply permissions Using service accounts in applications Using RBAC Authorization Kubernetes documentation 9.2. Log output types Outputs define the destination where logs are sent to from a log forwarder. You can configure multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. 9.2.1. Supported log forwarding outputs Outputs can be any of the following types: Table 9.8. Supported log output types Output type Protocol Tested with Logging versions Supported collector type Elasticsearch v6 HTTP 1.1 6.8.1, 6.8.23 5.6+ Fluentd, Vector Elasticsearch v7 HTTP 1.1 7.12.2, 7.17.7, 7.10.1 5.6+ Fluentd, Vector Elasticsearch v8 HTTP 1.1 8.4.3, 8.6.1 5.6+ Fluentd [1] , Vector Fluent Forward Fluentd forward v1 Fluentd 1.14.6, Logstash 7.10.1, Fluentd 1.14.5 5.4+ Fluentd Google Cloud Logging REST over HTTPS Latest 5.7+ Vector HTTP HTTP 1.1 Fluentd 1.14.6, Vector 0.21 5.7+ Fluentd, Vector Kafka Kafka 0.11 Kafka 2.4.1, 2.7.0, 3.3.1 5.4+ Fluentd, Vector Loki REST over HTTP and HTTPS 2.3.0, 2.5.0, 2.7, 2.2.1 5.4+ Fluentd, Vector Splunk HEC 8.2.9, 9.0.0 5.7+ Vector Syslog RFC3164, RFC5424 Rsyslog 8.37.0-9.el7, rsyslog-8.39.0 5.4+ Fluentd, Vector [2] Amazon CloudWatch REST over HTTPS Latest 5.4+ Fluentd, Vector Fluentd does not support Elasticsearch 8 in the logging version 5.6.2. 
Vector supports Syslog in the logging version 5.7 and higher. 9.2.2. Output type descriptions default The on-cluster, Red Hat managed log store. You are not required to configure the default output. Note If you configure a default output, you receive an error message, because the default output name is reserved for referencing the on-cluster, Red Hat managed log store. loki Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. kafka A Kafka broker. The kafka output can use a TCP or TLS connection. elasticsearch An external Elasticsearch instance. The elasticsearch output can use a TLS connection. fluentdForward An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS. Important The fluentdForward output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the http output. syslog An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection. cloudwatch Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). 9.3. Enabling JSON log forwarding You can configure the Log Forwarding API to parse JSON strings into a structured object. 9.3.1. Parsing JSON logs You can use a ClusterLogForwarder object to parse JSON logs into a structured object and forward them to a supported output. To illustrate how this works, suppose that you have the following structured JSON log entry: Example structured JSON log entry {"level":"info","name":"fred","home":"bedrock"} To enable parsing JSON log, you add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example: Example snippet showing parse: json pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json When you enable parsing JSON logs by using parse: json , the CR copies the JSON-structured log entry in a structured field, as shown in the following example: Example structured output containing the structured JSON log entry {"structured": { "level": "info", "name": "fred", "home": "bedrock" }, "more fields..."} Important If the log entry does not contain valid structured JSON, the structured field is absent. 9.3.2. Configuring JSON log data for Elasticsearch If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) to group each schema into a single output definition. This way, each schema is forwarded to a separate index. Important If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Structure types You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store: structuredTypeKey is the name of a message field. The value of that field is used to construct the index name. 
kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name. openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name. kubernetes.container_name uses the container name to construct the index name. structuredTypeName : If the structuredTypeKey field is not set or its key is not present, the structuredTypeName value is used as the structured type. When you use both the structuredTypeKey field and the structuredTypeName field together, the structuredTypeName value provides a fallback index name if the key in the structuredTypeKey field is missing from the JSON log data. Note Although you can set the value of structuredTypeKey to any field shown in the "Log Record Fields" topic, the most useful fields are shown in the preceding list of structure types. A structuredTypeKey: kubernetes.labels.<key> example Suppose the following: Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google". The user labels these application pods with logFormat=apache and logFormat=google . You use the following snippet in your ClusterLogForwarder CR YAML file. apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables parsing JSON logs. In that case, the following structured log record goes to the app-apache-write index: And the following structured log record goes to the app-google-write index: A structuredTypeKey: openshift.labels.<key> example Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2 1 Uses the value of the key-value pair that is formed by the OpenShift myLabel label. 2 The myLabel element gives its string value, myValue , to the structured log record. In that case, the following structured log record goes to the app-myValue-write index: Additional considerations The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write". Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices. If there is no non-empty structured type, forward an unstructured record with no structured field. It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats , not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache . 9.3.3. Forwarding JSON logs to the Elasticsearch log store For an Elasticsearch log store, if your JSON log entries follow different schemas , configure the ClusterLogForwarder custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema. 
Important Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Procedure Add the following snippet to your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json Use structuredTypeKey field to specify one of the log record fields. Use structuredTypeName field to specify a name. Important To parse JSON logs, you must set both the structuredTypeKey and structuredTypeName fields. For inputRefs , specify which log types to forward by using that pipeline, such as application, infrastructure , or audit . Add the parse: json element to pipelines. Create the CR object: USD oc create -f <filename>.yaml The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy. USD oc delete pod --selector logging-infra=collector 9.3.4. Forwarding JSON logs from containers in the same pod to separate indices You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of app- . It is recommended that Elasticsearch be configured with aliases to accommodate this. Important JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats. Prerequisites Logging for Red Hat OpenShift: 5.5 Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables multi-container outputs. Create or edit a YAML file that defines the Pod CR object: apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage 1 Format: containerType.logging.openshift.io/<container-name>: <index> 2 Annotation names must match container names Warning This configuration might significantly increase the number of shards on the cluster. Additional Resources Kubernetes Annotations Additional resources About log forwarding 9.4. Configuring log forwarding By default, the logging sends container and infrastructure logs to the default internal log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. 
If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder. Note To send audit logs to the internal Elasticsearch log store, use the Cluster Log Forwarder as described in Forwarding audit logs to the log store . 9.4.1. About forwarding logs to third-party systems To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object. pipeline Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following: application . Container logs generated by user applications running in the cluster, except infrastructure container applications. infrastructure . Container logs from pods that run in the openshift* , kube* , or default projects and journal logs sourced from node file system. audit . Audit logs generated by the node audit system, auditd , Kubernetes API server, OpenShift API server, and OVN network. You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message. input Forwards the application logs associated with a specific project to a pipeline. In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter. Secret A key:value map that contains confidential data such as user credentials. Note the following: If a ClusterLogForwarder CR object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the default output. By default, the logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API. If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped. You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. The logging does not comply with those regulations. The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-apps-logs project to the internal Elasticsearch instance. 
Sample log forwarding outputs and pipelines apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: "elasticsearch" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: "kafka" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: "true" 8 datacenter: "east" - name: infrastructure-logs 9 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: "west" - name: my-app 10 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 11 - application outputRefs: - kafka-app labels: datacenter: "south" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Configuration for an secure Elasticsearch output using a secret with a secure URL. A name to describe the output. The type of output: elasticsearch . The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project. 4 Configuration for an insecure Elasticsearch output: A name to describe the output. The type of output: elasticsearch . The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. 5 Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL A name to describe the output. The type of output: kafka . Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix. 6 Configuration for an input to filter application logs from the my-project namespace. 7 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance: A name to describe the pipeline. The inputRefs is the log type, in this example audit . The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance. Optional: Labels to add to the logs. 8 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 9 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance. 10 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance. A name to describe the pipeline. The inputRefs is a specific input: my-app-logs . The outputRefs is default . Optional: String. One or more labels to add to the logs. 11 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name: The inputRefs is the log type, in this example application . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Fluentd log handling when the external log aggregator is unavailable If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. 
If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods. Supported Authorization Keys Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify mismatches between authorization combinations. Transport Layer Security (TLS) Using a TLS URL ( https://... or tls://... ) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields: passphrase : (string) Passphrase to decode an encoded TLS private key. Requires tls.key . ca-bundle.crt : (string) File name of a customer CA for server authentication. Username and Password username : (string) Authentication user name. Requires password . password : (string) Authentication password. Requires username . Simple Authentication Security Layer (SASL) sasl.enable : (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set. sasl.mechanisms : (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used. sasl.allow-insecure : (boolean) Allow mechanisms that send clear-text passwords. Defaults to false. 9.4.1.1. Creating a Secret You can create a secret in the directory that contains your certificate and key files by using the following command: USD oc create secret generic -n <namespace> <secret_name> \ --from-file=ca-bundle.crt=<your_bundle_file> \ --from-literal=username=<your_username> \ --from-literal=password=<your_password> Note Generic or opaque secrets are recommended for best results. 9.4.2. Creating a log forwarder To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. The ClusterLogForwarder CR must be named instance , and must be created in the openshift-logging namespace. Important You need administrator permissions for the openshift-logging namespace. ClusterLogForwarder resource example apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: # ... pipelines: - inputRefs: - <log_type> 3 outputRefs: - <output_name> 4 outputs: - name: <output_name> 5 type: <output_type> 6 url: <log_output_url> 7 # ... 1 The CR name must be instance . 2 The CR namespace must be openshift-logging . 3 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 4 5 A name for the output that you want to forward logs to. 6 The type of output that you want to forward logs to. The value of this field can be default , loki , kafka , elasticsearch , fluentdForward , syslog , or cloudwatch . 7 The URL of the output that you want to forward logs to.
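For reference, the following is a minimal sketch of a filled-in ClusterLogForwarder resource that forwards application logs to an external Loki endpoint. The output name, pipeline name, and URL are illustrative placeholders, not values defined elsewhere in this documentation; substitute the details of your own log aggregator.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: loki-example                  # illustrative output name
    type: loki
    url: https://loki.example.com:3100  # hypothetical Loki endpoint
  pipelines:
  - name: application-to-loki           # illustrative pipeline name
    inputRefs:
    - application                       # collect application logs only
    outputRefs:
    - loki-example                      # forward to the output defined above
As with the template above, each pipeline simply connects one or more log types ( inputRefs ) to one or more named outputs ( outputRefs ); you can list additional outputs and pipelines in the same resource. 9.4.3.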
Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true . Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true 9.4.3.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. Table 9.9. Supported languages per collector: Language Fluentd Vector Java [✓] [✓] JS [✓] [✓] Ruby [✓] [✓] Python [✓] [✓] Golang [✓] [✓] PHP [✓] Dart [✓] [✓] 9.4.3.2. Troubleshooting When enabled, the collector configuration will include a new section with type: detect_exceptions Example vector configuration section Example fluentd config section 9.4.4. Forwarding logs to Google Cloud Platform (GCP) You can forward logs to Google Cloud Logging in addition to, or instead of, the internal default OpenShift Container Platform log store. Note Using this feature with Fluentd is not supported. Prerequisites Red Hat OpenShift Logging Operator 5.5.1 and later Procedure Create a secret using your Google service account key . USD oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json> Create a ClusterLogForwarder Custom Resource YAML using the template below: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: "instance" namespace: "openshift-logging" spec: outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : "openshift-gce-devel" 1 logId : "app-gcp" 2 pipelines: - name: test-app inputRefs: 3 - application outputRefs: - gcp-1 1 Set either a projectId , folderId , organizationId , or billingAccountId field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy . 2 Set the value to add to the logName field of the Log Entry . 3 Specify which log types to forward by using the pipeline: application , infrastructure , or audit . Additional resources Google Cloud Billing Documentation Google Cloud Logging Query Language Documentation 9.4.5. Forwarding logs to Splunk You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default OpenShift Container Platform log store. Note Using this feature with Fluentd is not supported. Prerequisites Red Hat OpenShift Logging Operator 5.6 or later A ClusterLogging instance with vector specified as the collector Base64 encoded Splunk HEC token Procedure Create a secret using your Base64 encoded Splunk HEC token. 
USD oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token> Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: "instance" 1 namespace: "openshift-logging" 2 spec: outputs: - name: splunk-receiver 3 secret: name: vector-splunk-secret 4 type: splunk 5 url: <http://your.splunk.hec.url:8088> 6 pipelines: 7 - inputRefs: - application - infrastructure name: 8 outputRefs: - splunk-receiver 9 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the name of the secret that contains your HEC token. 5 Specify the output type as splunk . 6 Specify the URL (including port) of your Splunk HEC. 7 Specify which log types to forward by using the pipeline: application , infrastructure , or audit . 8 Optional: Specify a name for the pipeline. 9 Specify the name of the output to use when forwarding logs with this pipeline. 9.4.6. Forwarding logs over HTTP Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify http as the output type in the ClusterLogForwarder custom resource (CR). Procedure Create or edit the ClusterLogForwarder CR using the template below: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: "instance" namespace: "openshift-logging" spec: outputs: - name: httpout-app type: http url: 1 http: headers: 2 h1: v1 h2: v2 method: POST secret: name: 3 tls: insecureSkipVerify: 4 pipelines: - name: inputRefs: - application outputRefs: - 5 1 Destination address for logs. 2 Additional headers to send with the log record. 3 Secret name for destination credentials. 4 Values are either true or false . 5 This value should be the same as the output name. 9.4.7. Forwarding application logs from specific projects You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform. To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: "my-project" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 The name of the output. 4 The output type: elasticsearch , fluentdForward , syslog , or kafka . 5 The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt , tls.key , and ca-bundle.crt keys that each point to the certificates they represent. 7 The configuration for an input to filter application logs from the specified projects. 8 If no namespace is specified, logs are collected from all namespaces. 9 The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named forward-to-fluentd-insecure forwards logs from an input named my-app-logs to an output named fluentd-server-insecure . 10 A list of inputs. 11 The name of the output to use. 12 Optional: String. One or more labels to add to the logs. 13 Configuration for a pipeline to send logs to other log aggregators. Optional: Specify a name for the pipeline. Specify which log types to forward by using the pipeline: application, infrastructure , or audit . Specify the name of the output to use when forwarding logs with this pipeline. Optional: Specify the default output to forward logs to the default log store. Optional: String. One or more labels to add to the logs. 14 Note that application logs from all namespaces are collected when using this configuration. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 9.4.8. Forwarding application logs from specific pods As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector. Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector. To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels , as shown in the following example. 
Example ClusterLogForwarder CR YAML file apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - default ... 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify one or more comma-separated values from inputs[].name . 4 Specify one or more comma-separated values from outputs[] . 5 Define a unique inputs[].name for each application that has a unique set of pod labels. 6 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs. 7 Optional: Specify one or more namespaces. 8 Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance. Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces , as shown in the preceding example. Optional: You can send log data from additional applications that have different pod labels to the same pipeline. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown. Update the selectors to match the pod labels of this application. Add the new inputs[].name value to inputRefs . For example: Create the CR object: USD oc create -f <file-name>.yaml Additional resources For more information on matchLabels in Kubernetes, see Resources that support set-based requirements . 9.4.9. Forwarding logs to an external Loki logging system You can forward logs to an external Loki logging system in addition to, or instead of, the default log store. To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection. Prerequisites You must have a Loki logging system running at the URL you specify with the url field in the CR. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: "loki" 4 url: http://loki.insecure.com:3100 5 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 6 type: "loki" url: https://loki.secure.com:3100 secret: name: loki-secret 7 loki: tenantKey: kubernetes.namespace_name 8 labelKeys: - kubernetes.labels.foo 9 pipelines: - name: application-logs 10 inputRefs: 11 - application - audit outputRefs: 12 - loki-secure 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the type as "loki" . 5 Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. Loki's default port for HTTP(S) communication is 3100. 
6 For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret . 7 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password." 8 Optional: Specify a meta-data key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. 9 Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]* . Illegal characters in meta-data keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo meta-data key becomes Loki label kubernetes_labels_foo . If you do not set labelKeys , the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host] . Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config . You can still query based on any log record field using query filters. 10 Optional: Specify a name for the pipeline. 11 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 12 Specify the name of the output to use when forwarding logs with this pipeline. Note Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts. Apply the ClusterLogForwarder CR object by running the following command: USD oc apply -f <filename>.yaml Additional resources Configuring Loki server 9.4.10. Forwarding logs to an external Elasticsearch instance You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform. To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection. To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. Note If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a ClusterLogForwarder CR. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-example 3 type: elasticsearch 4 elasticsearch: version: 8 5 url: http://elasticsearch.example.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-example 10 - default 11 labels: myLabel: "myValue" 12 # ... 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the elasticsearch type. 5 Specify the Elasticsearch version. This can be 6 , 7 , or 8 . 6 Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 7 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. The secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password." 8 Optional: Specify a name for the pipeline. 9 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 10 Specify the name of the output to use when forwarding logs with this pipeline. 11 Optional: Specify the default output to send the logs to the internal Elasticsearch instance. 12 Optional: String. One or more labels to add to the logs. Apply the ClusterLogForwarder CR: USD oc apply -f <filename>.yaml Example: Setting a secret that contains a username and password You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default. apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password> # ... Create the secret from the file: USD oc create -f openshift-test-secret.yaml -n openshift-logging Specify the name of the secret in the ClusterLogForwarder CR: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret # ... Note In the value of the url field, the prefix can be http or https . Apply the CR object: USD oc apply -f <filename>.yaml 9.4.11. Forwarding logs using the Fluentd forward protocol You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store.
You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform. To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: "C1234" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the fluentdForward type. 5 Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents. 7 Optional: Specify a name for the pipeline. 8 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 9 Specify the name of the output to use when forwarding logs with this pipeline. 10 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 11 Optional: String. One or more labels to add to the logs. 12 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml 9.4.11.1. Enabling nanosecond precision for Logstash to ingest data from fluentd For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file. Procedure In the Logstash configuration file, set nanosecond_precision to true . Example Logstash configuration file input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } } 9.4.12. Forwarding logs using the syslog protocol You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. 
You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform. To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 labels: secure: "true" 12 syslog: "east" - name: syslog-west 13 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: "west" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the syslog type. 5 Optional: Specify the syslog parameters, listed below. 6 Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 7 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents. 8 Optional: Specify a name for the pipeline. 9 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 10 Specify the name of the output to use when forwarding logs with this pipeline. 11 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 12 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <filename>.yaml 9.4.12.1. Adding log source information to message output You can add namespace_name , pod_name , and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR). 
spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout Note This configuration is compatible with both RFC3164 and RFC5424. Example syslog message output without AddLogSource <15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56} Example syslog message output with AddLogSource <15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76} 9.4.12.2. Syslog parameters You can configure the following for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 RFC. facility: The syslog facility . The value can be a decimal integer or a case-insensitive keyword: 0 or kern for kernel messages 1 or user for user-level messages, the default. 2 or mail for the mail system 3 or daemon for system daemons 4 or auth for security/authentication messages 5 or syslog for messages generated internally by syslogd 6 or lpr for the line printer subsystem 7 or news for the network news subsystem 8 or uucp for the UUCP subsystem 9 or cron for the clock daemon 10 or authpriv for security authentication messages 11 or ftp for the FTP daemon 12 or ntp for the NTP subsystem 13 or security for the syslog audit log 14 or console for the syslog alert log 15 or solaris-cron for the scheduling daemon 16 - 23 or local0 - local7 for locally used facilities Optional: payloadKey : The record field to use as payload for the syslog message. Note Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog. rfc: The RFC to be used for sending logs using syslog. The default is RFC5424. severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword: 0 or Emergency for messages indicating the system is unusable 1 or Alert for messages indicating action must be taken immediately 2 or Critical for messages indicating critical conditions 3 or Error for messages indicating error conditions 4 or Warning for messages indicating warning conditions 5 or Notice for messages indicating normal but significant conditions 6 or Informational for messages indicating informational messages 7 or Debug for messages indicating debug-level messages, the default tag: Tag specifies a record field to use as a tag on the syslog message. trimPrefix: Remove the specified prefix from the tag. 9.4.12.3. Additional RFC5424 syslog parameters The following parameters apply to RFC5424: appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424 . msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424 . procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424 . 9.4.13. Forwarding logs to a Kafka broker You can forward logs to an external Kafka broker in addition to, or instead of, the default log store. 
To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: app-logs 3 type: kafka 4 url: tls://kafka.example.devlab.com:9093/app-topic 5 secret: name: kafka-secret 6 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 7 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 8 inputRefs: 9 - application outputRefs: 10 - app-logs labels: logType: "application" 11 - name: infra-topic 12 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: "infra" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs - default 13 labels: logType: "audit" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the kafka type. 5 Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents. 7 Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output. 8 Optional: Specify a name for the pipeline. 9 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 10 Specify the name of the output to use when forwarding logs with this pipeline. 11 Optional: String. One or more labels to add to the logs. 12 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. 13 Optional: Specify default to forward logs to the internal Elasticsearch instance. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example: # ... spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3 # ... 1 Specify a kafka key that has a brokers and topic key. 2 Use the brokers key to specify an array of one or more brokers. 3 Use the topic key to specify the target topic that receives the logs. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 9.4.14. 
Forwarding logs to Amazon CloudWatch You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store. To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output. Procedure Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example: apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= Create the secret. For example: USD oc apply -f cw-secret.yaml Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the cloudwatch type. 5 Optional: Specify how to group the logs: logType creates log groups for each log type namespaceName creates a log group for each application name space. It also creates separate log groups for infrastructure and audit logs. namespaceUUID creates a new log groups for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 6 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 7 Specify the AWS region. 8 Specify the name of the secret that contains your AWS credentials. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. Create the CR object: USD oc create -f <file-name>.yaml Example: Using ClusterLogForwarder with Amazon CloudWatch Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch. Suppose that you are running an OpenShift Container Platform cluster named mycluster . The following command returns the cluster's infrastructureName , which you will use to compose aws commands later on: USD oc get Infrastructure/cluster -ojson | jq .status.infrastructureName "mycluster-7977k" To generate log data for this example, you run a busybox pod in a namespace called app . The busybox pod writes a message to stdout every three seconds: USD oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done' USD oc logs -f busybox My life is my message My life is my message My life is my message ... 
You can look up the UUID of the app namespace where the busybox pod runs: USD oc get ns/app -ojson | jq .metadata.uid "794e1e1a-b9f5-4958-a190-e76a9b53d7bf" In your ClusterLogForwarder custom resource (CR), you configure the infrastructure , audit , and application log types as inputs to the all-logs pipeline. You also connect this pipeline to cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw Each region in CloudWatch contains three levels of objects: log group log stream log event With groupBy: logType in the ClusterLogForwarding CR, the three log types in the inputRefs produce three log groups in Amazon Cloudwatch: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.application" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" Each of the log groups contains log streams: USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName "kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log" USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log" ... USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log" ... Each log stream contains log events. 
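If you do not know in advance which log stream contains a particular message, you can search across every stream in a log group instead of querying the streams one by one. The following is a minimal sketch, not part of the original procedure; it assumes the same mycluster-7977k.application log group and the message text from the busybox example, so adjust the group name and pattern for your own cluster:
$ aws logs filter-log-events \
    --log-group-name mycluster-7977k.application \
    --filter-pattern '"My life is my message"' \
    --max-items 5 | jq .events[].message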
To see a log event from the busybox Pod, you specify its log stream from the application log group: USD aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { "events": [ { "timestamp": 1629422704178, "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}", "ingestionTime": 1629422744016 }, ... Example: Customizing the prefix in log group names In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k , with an arbitrary string like demo-group-prefix . To make this change, you update the groupPrefix field in the ClusterLogForwarding CR: cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2 The value of groupPrefix replaces the default infrastructureName prefix: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "demo-group-prefix.application" "demo-group-prefix.audit" "demo-group-prefix.infrastructure" Example: Naming log groups after application namespace names For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace. If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before. If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead. To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceName region: us-east-2 Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups. In Amazon Cloudwatch, the namespace name appears at the end of each log group name. 
Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.app" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace. The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. Example: Naming log groups after application namespace UUIDs For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace. If you delete an application namespace object and create a new one, CloudWatch creates a new log group. If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead. To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceUUID region: us-east-2 In Amazon Cloudwatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace "mycluster-7977k.audit" "mycluster-7977k.infrastructure" The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. 9.4.15. Forwarding logs to Amazon CloudWatch from STS enabled clusters For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the Cloud Credential Operator(CCO) utility ccoctl . Prerequisites Logging for Red Hat OpenShift: 5.5 and later Procedure Create a CredentialsRequest custom resource YAML by using the template below: CloudWatch credentials request template apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector Use the ccoctl command to create a role for AWS using your CredentialsRequest CR. With the CredentialsRequest object, this ccoctl command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml . 
This secret file contains the role_arn key/value used during authentication with the AWS IAM identity provider. USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1 1 <name> is the name used to tag your cloud resources and should match the name used during your STS cluster install. Apply the secret created: USD oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml Create or edit a ClusterLogForwarder custom resource: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: <your_role_name> 8 pipelines: - name: to-cloudwatch 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the cloudwatch type. 5 Optional: Specify how to group the logs: logType creates log groups for each log type namespaceName creates a log group for each application namespace. Infrastructure and audit logs are unaffected, remaining grouped by logType . namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 6 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 7 Specify the AWS region. 8 Specify the name of the secret that contains your AWS credentials. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 9.4.16. Creating a secret for AWS CloudWatch with an existing AWS role If you have an existing role for AWS, you can create a secret for AWS with STS using the oc create secret --from-literal command: USD oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions Example Secret apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions Additional resources AWS STS API Reference 9.5. Configuring the logging collector Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes . All supported modifications to the log collector can be performed through the spec.collection.log.fluentd stanza in the ClusterLogging custom resource (CR). 9.5.1. Configuring the log collector You can configure which log collector type your logging uses by modifying the ClusterLogging custom resource (CR). Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
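Before changing the collector type, it can help to confirm which type is currently configured. The following one-liner is a sketch rather than part of the official procedure; it assumes the default instance name and namespace used throughout this section, and that your ClusterLogging CR places the collector type at spec.collection.type (some older CRs nest it under spec.collection.logs.type instead):
$ oc -n openshift-logging get clusterlogging instance -o jsonpath='{.spec.collection.type}'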
Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogging CR. Procedure Modify the ClusterLogging CR collection spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... collection: type: <log_collector_type> 1 resources: {} tolerations: {} # ... 1 The log collector type you want to use for the logging. This can be vector or fluentd . Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 9.5.2. Viewing logging collector pods You can view the logging collector pods and the corresponding nodes that they are running on. Procedure Run the following command in a project to view the logging collector pods and their details: USD oc get pods --selector component=collector -o wide -n <project_name> Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none> 9.5.3. Configure log collector CPU and memory limits The log collector allows for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi # ... 1 Specify the CPU and memory limits and requests as needed. The values shown are the default values. 9.5.4. Advanced configuration for the Fluentd log forwarder Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors: Chunk and chunk buffer sizes Chunk flushing behavior Chunk forwarding retry behavior Fluentd collects log data in a single blob called a chunk . When Fluentd creates a chunk, the chunk is considered to be in the stage , where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue , where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured. By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. 
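To make the default retry behavior concrete, the following small shell loop prints the wait times produced by the exponential backoff settings described in the table below: a retryWait of 1s that doubles on every attempt, capped at the retryMaxInterval of 300s, within the retryTimeOut of 60m. This is an illustrative sketch only; Fluentd also randomizes the intervals slightly, which the loop ignores.
wait=1; elapsed=0
while [ "$elapsed" -lt 3600 ]; do   # stop once the 60m retryTimeOut is reached
  echo "next retry in ${wait}s (elapsed so far: ${elapsed}s)"
  elapsed=$((elapsed + wait))
  wait=$((wait * 2))
  [ "$wait" -gt 300 ] && wait=300   # cap at retryMaxInterval
done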
These parameters can help you determine the trade-offs between latency and throughput. To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flushes and retries. You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd. Note These parameters are: Not relevant to most users. The default settings should give good general performance. Only for advanced users with detailed knowledge of Fluentd configuration and performance. Only for performance tuning. They have no effect on functional aspects of logging. Table 9.10. Advanced Fluentd Configuration Parameters Parameter Description Default chunkLimitSize The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. 8m totalLimitSize The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. Approximately 15% of the node disk distributed across all outputs. flushInterval The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days). 1s flushMode The method to perform flushes: lazy : Flush chunks based on the timekey parameter. You cannot modify the timekey parameter. interval : Flush chunks based on the flushInterval parameter. immediate : Flush chunks immediately after data is added to a chunk. interval flushThreadCount The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. 2 overflowAction The chunking behavior when the queue is full: throw_exception : Raise an exception to show in the log. block : Stop data chunking until the full buffer issue is resolved. drop_oldest_chunk : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. block retryMaxInterval The maximum time in seconds for the exponential_backoff retry method. 300s retryType The retry method when flushing fails: exponential_backoff : Increase the time between flush retries. Fluentd doubles the time it waits until the next retry, until the retry_max_interval parameter is reached. periodic : Retries flushes periodically, based on the retryWait parameter. exponential_backoff retryTimeOut The maximum time interval to attempt retries before the record is discarded. 60m retryWait The time in seconds before the chunk flush. 1s For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation.
Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Add or modify any of the following parameters: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: "300s" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9 # ... 1 Specify the maximum size of each chunk before it is queued for flushing. 2 Specify the interval between chunk flushes. 3 Specify the method to perform chunk flushes: lazy , interval , or immediate . 4 Specify the number of threads to use for chunk flushes. 5 Specify the chunking behavior when the queue is full: throw_exception , block , or drop_oldest_chunk . 6 Specify the maximum interval in seconds for the exponential_backoff chunk flushing method. 7 Specify the retry type when chunk flushing fails: exponential_backoff or periodic . 8 Specify the time in seconds before the chunk flush. 9 Specify the maximum size of the chunk buffer. Verify that the Fluentd pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Check that the new values are in the fluentd config map: USD oc extract configmap/collector --confirm Example fluentd.conf <buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}" total_limit_size "#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer> 9.6. Collecting and storing Kubernetes events The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by the logging. You must manually deploy the Event Router. The Event Router collects events from all projects and writes them to STDOUT . The collector then forwards those events to the store defined in the ClusterLogForwarder custom resource (CR). Important The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed. 9.6.1. Deploying and configuring the Event Router Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the openshift-logging project to ensure it collects events from across the cluster. Note The Event Router image is not a part of the Red Hat OpenShift Logging Operator and must be downloaded separately. The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can either use this template without making changes or edit the template to change the deployment object CPU and memory requests. Prerequisites You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role. The Red Hat OpenShift Logging Operator must be installed. Procedure Create a template for the Event Router: apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: "A pod forwarding kubernetes events to OpenShift Logging stack." 
tags: "events,EFK,logging,cluster-logging" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [""] resources: ["events"] verbs: ["get", "watch", "list"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { "sink": "stdout" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" spec: selector: matchLabels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" replicas: 1 template: metadata: labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: "registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4" - name: CPU 7 displayName: CPU value: "100m" - name: MEMORY 8 displayName: Memory value: "128Mi" - name: NAMESPACE displayName: Namespace value: "openshift-logging" 9 1 Creates a Service Account in the openshift-logging project for the Event Router. 2 Creates a ClusterRole to monitor for events in the cluster. 3 Creates a ClusterRoleBinding to bind the ClusterRole to the service account. 4 Creates a config map in the openshift-logging project to generate the required config.json file. 5 Creates a deployment in the openshift-logging project to generate and configure the Event Router pod. 6 Specifies the image, identified by a tag such as v0.4 . 7 Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m . 8 Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi . 9 Specifies the openshift-logging project to install objects in. 
Use the following command to process and apply the template: USD oc process -f <templatefile> | oc apply -n openshift-logging -f - For example: USD oc process -f eventrouter.yaml | oc apply -n openshift-logging -f - Example output serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created Validate that the Event Router installed in the openshift-logging project: View the new Event Router pod: USD oc get pods --selector component=eventrouter -o name -n openshift-logging Example output pod/cluster-logging-eventrouter-d649f97c8-qvv8r View the events collected by the Event Router: USD oc logs <cluster_logging_eventrouter_pod> -n openshift-logging For example: USD oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging Example output {"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}} You can also use Kibana to view events by creating an index pattern using the Elasticsearch infra index.
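Because the Event Router writes each event as a single JSON object per line, you can also filter the collected events directly on the command line. The following is a sketch, assuming jq is installed locally and the payload shape shown in the example output above; it keeps only events of type Warning:
$ oc logs <cluster_logging_eventrouter_pod> -n openshift-logging | jq -c 'select(.event.type == "Warning")'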
[ "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}", "{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}", "pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json", "{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }", "{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json", "oc create -f <filename>.yaml", "oc delete pod --selector logging-infra=collector", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json", "apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 8 datacenter: \"east\" - name: infrastructure-logs 9 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 10 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 11 - application outputRefs: - kafka-app labels: datacenter: \"south\"", "oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: - <log_type> 3 outputRefs: - <output_name> 4 outputs: - name: <output_name> 5 type: 
<output_type> 6 url: <log_output_url> 7", "java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true", "[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000", "<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>", "oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 1 logId : \"app-gcp\" 2 pipelines: - name: test-app inputRefs: 3 - application outputRefs: - gcp-1", "oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: outputs: - name: splunk-receiver 3 secret: name: vector-splunk-secret 4 type: splunk 5 url: <http://your.splunk.hec.url:8088> 6 pipelines: 7 - inputRefs: - application - infrastructure name: 8 outputRefs: - splunk-receiver 9", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: outputs: - name: httpout-app type: http url: 1 http: headers: 2 h1: v1 h2: v2 method: POST secret: name: 3 tls: insecureSkipVerify: 4 pipelines: - name: inputRefs: - application outputRefs: - 5", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - default", "- inputRefs: [ myAppLogData, myOtherAppLogData ]", "oc create -f <file-name>.yaml", "apiVersion: 
\"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: \"loki\" 4 url: http://loki.insecure.com:3100 5 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 6 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 7 loki: tenantKey: kubernetes.namespace_name 8 labelKeys: - kubernetes.labels.foo 9 pipelines: - name: application-logs 10 inputRefs: 11 - application - audit outputRefs: 12 - loki-secure", "oc apply -f <filename>.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-example 3 type: elasticsearch 4 elasticsearch: version: 8 5 url: http://elasticsearch.example.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-example 10 - default 11 labels: myLabel: \"myValue\" 12", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>", "oc create secret -n openshift-logging openshift-test-secret.yaml", "kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"", "oc create -f <file-name>.yaml", "input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 labels: secure: \"true\" 12 syslog: \"east\" - name: syslog-west 13 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"", "oc create -f <filename>.yaml", "spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout", "<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - 
{\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}", "<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: app-logs 3 type: kafka 4 url: tls://kafka.example.devlab.com:9093/app-topic 5 secret: name: kafka-secret 6 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 7 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 8 inputRefs: 9 - application outputRefs: 10 - app-logs labels: logType: \"application\" 11 - name: infra-topic 12 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs - default 13 labels: logType: \"audit\"", "spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=", "oc apply -f cw-secret.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11", "oc create -f <file-name>.yaml", "oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"", "oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message", "oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"", "aws --output json logs 
describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"", "aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },", "cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"", "cloudwatch: groupBy: namespaceName region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "cloudwatch: groupBy: namespaceUUID region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec 
statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1", "oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: <your_role_name> 8 pipelines: - name: to-cloudwatch 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11", "oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions", "apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}", "oc apply -f <filename>.yaml", "oc get pods --selector component=collector -o wide -n <project_name>", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9", "oc get pods -l component=collector -n openshift-logging", "oc extract configmap/collector --confirm", "<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>", "apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 
apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9", "oc process -f <templatefile> | oc apply -n openshift-logging -f -", "oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -", "serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created", "oc get pods --selector component=eventrouter -o name -n openshift-logging", "pod/cluster-logging-eventrouter-d649f97c8-qvv8r", "oc logs <cluster_logging_eventrouter_pod> -n openshift-logging", "oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging", "{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/log-collection-and-forwarding
Chapter 2. Installing
Chapter 2. Installing Installing the Red Hat build of OpenTelemetry involves the following steps: Installing the Red Hat build of OpenTelemetry Operator. Creating a namespace for an OpenTelemetry Collector instance. Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance. 2.1. Installing the Red Hat build of OpenTelemetry from the web console You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Install the Red Hat build of OpenTelemetry Operator: Go to Operators OperatorHub and search for Red Hat build of OpenTelemetry Operator . Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat Install Install View Operator . Important This installs the Operator with the default presets: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-operators Update approval Automatic In the Details tab of the installed Operator page, under ClusterServiceVersion details , verify that the installation Status is Succeeded . Create a project of your choice for the OpenTelemetry Collector instance that you will create in the step by going to Home Projects Create Project . Create an OpenTelemetry Collector instance. Go to Operators Installed Operators . Select OpenTelemetry Collector Create OpenTelemetry Collector YAML view . In the YAML view , customize the OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] 1 For details, see the "Receivers" page. 2 For details, see the "Processors" page. 3 For details, see the "Exporters" page. Select Create . Verification Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance. Go to Operators Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the OpenTelemetry Collector instance are running. 2.2. Installing the Red Hat build of OpenTelemetry by using the CLI You can install the Red Hat build of OpenTelemetry from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. 
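A quick way to compare the two versions, assuming oc is already on your PATH, is the oc version command; the client version prints immediately, and the server version appears once you are logged in to the cluster:
$ oc version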
Run oc login : USD oc login --username=<your_username> Procedure Install the Red Hat build of OpenTelemetry Operator: Create a project for the Red Hat build of OpenTelemetry Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: "true" name: openshift-opentelemetry-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Check the Operator status by running the following command: USD oc get csv -n openshift-opentelemetry-operator Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step: To create a project without metadata, run the following command: USD oc new-project <project_of_opentelemetry_collector_instance> To create a project with metadata, run the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF Create an OpenTelemetry Collector instance in the project that you created for it. Note You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster. Customize the OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] 1 For details, see the "Receivers" page. 2 For details, see the "Processors" page. 3 For details, see the "Exporters" page. Apply the customized CR by running the following command: USD oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF Verification Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command: USD oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml Get the OpenTelemetry Collector service by running the following command: USD oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> 2.3. Using taints and tolerations To schedule the OpenTelemetry pods on dedicated nodes, see How to deploy the different OpenTelemetry components on infra nodes using nodeSelector and tolerations in OpenShift 4 2.4. 
Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator 2.5. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI
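After you apply the ClusterRole and ClusterRoleBinding, you can optionally confirm that the Operator service account received the permissions by impersonating it with the oc auth can-i command. This check is not part of the product procedure; adjust the namespace if your Operator is installed elsewhere. Both commands should print yes :
# Verify that the Operator service account can now manage cluster roles and bindings.
$ oc auth can-i create clusterroles --as=system:serviceaccount:openshift-opentelemetry-operator:opentelemetry-operator-controller-manager
$ oc auth can-i create clusterrolebindings --as=system:serviceaccount:openshift-opentelemetry-operator:opentelemetry-operator-controller-manager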
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-opentelemetry-operator", "oc new-project <project_of_opentelemetry_collector_instance>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]", "oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF", "oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml", "oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/red_hat_build_of_opentelemetry/install-otel
Chapter 5. Managed KIE Server
Chapter 5. Managed KIE Server A managed instance requires an available Process Automation Manager controller to start KIE Server. A Process Automation Manager controller manages KIE Server configuration in a centralized way. Each Process Automation Manager controller can manage multiple configurations at once, and there can be multiple Process Automation Manager controllers in the environment. Managed KIE Server can be configured with a list of Process Automation Manager controllers, but will only connect to one at a time. Important All Process Automation Manager controllers should be synchronized to ensure that the same set of configuration is provided to the server, regardless of the Process Automation Manager controller to which it connects. When KIE Server is configured with a list of Process Automation Manager controllers, it will attempt to connect to each of them at startup until a connection is successfully established with one of them. If a connection cannot be established, the server will not start, even if there is local storage available with configuration. This ensures consistency and prevents the server from running with redundant configuration. Note To run KIE Server in standalone mode without connecting to Process Automation Manager controllers, see Chapter 6, Unmanaged KIE Server .
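For illustration, a managed KIE Server instance running on Red Hat JBoss EAP is typically started with system properties that identify the server and list its Process Automation Manager controllers. The host names, server ID, and credentials below are placeholders and not values from this document; see the product documentation for the full list of supported properties:
# Start KIE Server with a comma-separated list of Process Automation Manager controllers.
$ EAP_HOME/bin/standalone.sh \
    -Dorg.kie.server.id=managed-kie-server-1 \
    -Dorg.kie.server.location=http://kieserver-host:8080/kie-server/services/rest/server \
    -Dorg.kie.server.controller=http://controller-1:8080/business-central/rest/controller,http://controller-2:8080/business-central/rest/controller \
    -Dorg.kie.server.controller.user=controllerUser \
    -Dorg.kie.server.controller.pwd=<controller_password>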
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/kie-server-managed-kie-server-con
Part IV. Infrastructure Services
Part IV. Infrastructure Services This part provides information on how to configure services and daemons and enable remote access to a Red Hat Enterprise Linux machine.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/part-infrastructure_services
Chapter 4. Examples
Chapter 4. Examples This chapter demonstrates the use of AMQ JavaScript through example programs. For more examples, see the AMQ JavaScript example suite and the Rhea examples . 4.1. Sending messages This client program connects to a server using <connection-url> , creates a sender for target <address> , sends a message containing <message-body> , closes the connection, and exits. Example: Sending messages "use strict"; var rhea = require("rhea"); var url = require("url"); if (process.argv.length !== 5) { console.error("Usage: send.js <connection-url> <address> <message-body>"); process.exit(1); } var conn_url = url.parse(process.argv[2]); var address = process.argv[3]; var message_body = process.argv[4]; var container = rhea.create_container(); container.on("sender_open", function (event) { console.log("SEND: Opened sender for target address '" + event.sender.target.address + "'"); }); container.on("sendable", function (event) { var message = { body: message_body }; event.sender.send(message); console.log("SEND: Sent message '" + message.body + "'"); event.sender.close(); event.connection.close(); }); var opts = { host: conn_url.hostname, port: conn_url.port || 5672, // To connect with a user and password: // username: "<username>", // password: "<password>", }; var conn = container.connect(opts); conn.open_sender(address); Running the example To run the example program, copy it to a local file and invoke it using the node command. For more information, see Chapter 3, Getting started . USD node send.js amqp://localhost queue1 hello 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. Example: Receiving messages "use strict"; var rhea = require("rhea"); var url = require("url"); if (process.argv.length !== 4 && process.argv.length !== 5) { console.error("Usage: receive.js <connection-url> <address> [<message-count>]"); process.exit(1); } var conn_url = url.parse(process.argv[2]); var address = process.argv[3]; var desired = 0; var received = 0; if (process.argv.length === 5) { desired = parseInt(process.argv[4]); } var container = rhea.create_container(); container.on("receiver_open", function (event) { console.log("RECEIVE: Opened receiver for source address '" + event.receiver.source.address + "'"); }); container.on("message", function (event) { var message = event.message; console.log("RECEIVE: Received message '" + message.body + "'"); received++; if (received == desired) { event.receiver.close(); event.connection.close(); } }); var opts = { host: conn_url.hostname, port: conn_url.port || 5672, // To connect with a user and password: // username: "<username>", // password: "<password>", }; var conn = container.connect(opts); conn.open_receiver(address); Running the example To run the example program, copy it to a local file and invoke it using the node command. For more information, see Chapter 3, Getting started . USD node receive.js amqp://localhost queue1
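For a quick end-to-end check, you can run the two programs against the same broker address. This assumes a broker or router is listening on localhost port 5672 with a queue named queue1 , as in the examples above:
# Terminal 1: wait for exactly one message on queue1, then exit.
$ node receive.js amqp://localhost queue1 1
# Terminal 2: send a single message.
$ node send.js amqp://localhost queue1 hello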
[ "\"use strict\"; var rhea = require(\"rhea\"); var url = require(\"url\"); if (process.argv.length !== 5) { console.error(\"Usage: send.js <connection-url> <address> <message-body>\"); process.exit(1); } var conn_url = url.parse(process.argv[2]); var address = process.argv[3]; var message_body = process.argv[4]; var container = rhea.create_container(); container.on(\"sender_open\", function (event) { console.log(\"SEND: Opened sender for target address '\" + event.sender.target.address + \"'\"); }); container.on(\"sendable\", function (event) { var message = { body: message_body }; event.sender.send(message); console.log(\"SEND: Sent message '\" + message.body + \"'\"); event.sender.close(); event.connection.close(); }); var opts = { host: conn_url.hostname, port: conn_url.port || 5672, // To connect with a user and password: // username: \"<username>\", // password: \"<password>\", }; var conn = container.connect(opts); conn.open_sender(address);", "node send.js amqp://localhost queue1 hello", "\"use strict\"; var rhea = require(\"rhea\"); var url = require(\"url\"); if (process.argv.length !== 4 && process.argv.length !== 5) { console.error(\"Usage: receive.js <connection-url> <address> [<message-count>]\"); process.exit(1); } var conn_url = url.parse(process.argv[2]); var address = process.argv[3]; var desired = 0; var received = 0; if (process.argv.length === 5) { desired = parseInt(process.argv[4]); } var container = rhea.create_container(); container.on(\"receiver_open\", function (event) { console.log(\"RECEIVE: Opened receiver for source address '\" + event.receiver.source.address + \"'\"); }); container.on(\"message\", function (event) { var message = event.message; console.log(\"RECEIVE: Received message '\" + message.body + \"'\"); received++; if (received == desired) { event.receiver.close(); event.connection.close(); } }); var opts = { host: conn_url.hostname, port: conn_url.port || 5672, // To connect with a user and password: // username: \"<username>\", // password: \"<password>\", }; var conn = container.connect(opts); conn.open_receiver(address);", "node receive.js amqp://localhost queue1" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/examples
8.2. Understanding the Default Behavior of Controller and Port Interfaces
8.2. Understanding the Default Behavior of Controller and Port Interfaces When controlling teamed port interfaces using the NetworkManager daemon, and especially when fault finding, keep the following in mind: Starting the controller interface does not automatically start the port interfaces. Starting a port interface always starts the controller interface. Stopping the controller interface also stops the port interfaces. A controller without ports can start static IP connections. A controller without ports waits for ports when starting DHCP connections. A controller with a DHCP connection waiting for ports completes when a port with a carrier is added. A controller with a DHCP connection waiting for ports continues waiting when a port without a carrier is added. Warning The use of direct cable connections without network switches is not supported for teaming. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding not supported with direct connection using crossover cables? for more information.
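For example, you can observe this behavior with nmcli . The connection names team0 and team0-port1 below are examples and depend on how the team and its ports were created:
# Starting a port connection also starts the controller connection.
$ nmcli connection up team0-port1
# Starting the controller connection on its own does not start the port connections.
$ nmcli connection up team0
# Review which connections are active and which devices they use.
$ nmcli -f NAME,DEVICE,TYPE,STATE connection show --active
# Inspect the runtime state of the team and its ports.
$ teamdctl team0 state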
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-team-understanding_the_default_behavior_of_controller_and_port_interfaces
Installing on JBoss EAP
Installing on JBoss EAP Red Hat Fuse 7.13 Install Red Hat Fuse 7.13 on Red Hat JBoss Enterprise Application Platform 7.4.16 Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_jboss_eap/index
4.13. Hardening TLS Configuration
4.13. Hardening TLS Configuration TLS ( Transport Layer Security ) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols , authentication methods , and encryption algorithms , it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. Note that the default settings provided by libraries included in Red Hat Enterprise Linux 7 are secure enough for most deployments. The TLS implementations use secure algorithms where possible while not preventing connections from or to legacy clients or servers. Apply the hardened settings described in this section in environments with strict security requirements where legacy clients or servers that do not support secure algorithms or protocols are not expected or allowed to connect. 4.13.1. Choosing Algorithms to Enable There are several components that need to be selected and configured. Each of the following directly influences the robustness of the resulting configuration (and, consequently, the level of support in clients) or the computational demands that the solution has on the system. Protocol Versions The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to include support for older versions of TLS (or even SSL ), allow your systems to negotiate connections using only the latest version of TLS . Do not allow negotiation using SSL version 2 or 3. Both of those versions have serious security vulnerabilities. Only allow negotiation using TLS version 1.0 or higher. The current version of TLS , 1.2, should always be preferred. Note Please note that currently, the security of all versions of TLS depends on the use of TLS extensions, specific ciphers (see below), and other workarounds. All TLS connection peers need to implement secure renegotiation indication ( RFC 5746 ), must not support compression, and must implement mitigating measures for timing attacks against CBC -mode ciphers (the Lucky Thirteen attack). TLS 1.0 clients need to additionally implement record splitting (a workaround against the BEAST attack). TLS 1.2 supports Authenticated Encryption with Associated Data ( AEAD ) mode ciphers like AES-GCM , AES-CCM , or Camellia-GCM , which have no known issues. All the mentioned mitigations are implemented in cryptographic libraries included in Red Hat Enterprise Linux. See Table 4.6, "Protocol Versions" for a quick overview of protocol versions and recommended usage. Table 4.6. Protocol Versions Protocol Version Usage Recommendation SSL v2 Do not use. Has serious security vulnerabilities. SSL v3 Do not use. Has serious security vulnerabilities. TLS 1.0 Use for interoperability purposes where needed. Has known issues that cannot be mitigated in a way that guarantees interoperability, and thus mitigations are not enabled by default. Does not support modern cipher suites. TLS 1.1 Use for interoperability purposes where needed. Has no known issues but relies on protocol fixes that are included in all the TLS implementations in Red Hat Enterprise Linux. Does not support modern cipher suites. TLS 1.2 Recommended version. Supports the modern AEAD cipher suites. 
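For example, you can check which protocol versions a particular server still negotiates by probing it with the openssl s_client command. Replace server.example.com with the host you are auditing, and note that the -ssl3 option is only available if your OpenSSL build retains SSL v3 support:
# The handshake should fail if the server correctly rejects SSL v3.
$ openssl s_client -connect server.example.com:443 -ssl3 < /dev/null
# The handshake should succeed on a server that allows TLS 1.2.
$ openssl s_client -connect server.example.com:443 -tls1_2 < /dev/null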
Some components in Red Hat Enterprise Linux are configured to use TLS 1.0 even though they provide support for TLS 1.1 or even 1.2 . This is motivated by an attempt to achieve the highest level of interoperability with external services that may not support the latest versions of TLS . Depending on your interoperability requirements, enable the highest available version of TLS . Important SSL v3 is not recommended for use. However, if, despite the fact that it is considered insecure and unsuitable for general use, you absolutely must leave SSL v3 enabled, see Section 4.8, "Using stunnel" for instructions on how to use stunnel to securely encrypt communications even when using services that do not support encryption or are only capable of using obsolete and insecure modes of encryption. Cipher Suites Modern, more secure cipher suites should be preferred to old, insecure ones. Always disable the use of eNULL and aNULL cipher suites, which do not offer any encryption or authentication at all. If at all possible, ciphers suites based on RC4 or HMAC-MD5 , which have serious shortcomings, should also be disabled. The same applies to the so-called export cipher suites, which have been intentionally made weaker, and thus are easy to break. While not immediately insecure, cipher suites that offer less than 128 bits of security should not be considered for their short useful life. Algorithms that use 128 bit of security or more can be expected to be unbreakable for at least several years, and are thus strongly recommended. Note that while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security. Always give preference to cipher suites that support (perfect) forward secrecy ( PFS ), which ensures the confidentiality of encrypted data even in case the server key is compromised. This rules out the fast RSA key exchange, but allows for the use of ECDHE and DHE . Of the two, ECDHE is the faster and therefore the preferred choice. You should also give preference to AEAD ciphers, such as AES-GCM , before CBC -mode ciphers as they are not vulnerable to padding oracle attacks. Additionally, in many cases, AES-GCM is faster than AES in CBC mode, especially when the hardware has cryptographic accelerators for AES . Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even faster than pure RSA key exchange. To provide support for legacy clients, you can install two pairs of certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for legacy ones). Public Key Length When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which is sufficiently large for true 128 bits of security. Warning Keep in mind that the security of your system is only as strong as the weakest link in the chain. For example, a strong cipher alone does not guarantee good security. The keys and the certificates are just as important, as well as the hash functions and keys used by the Certification Authority ( CA ) to sign your keys. 4.13.2. Using Implementations of TLS Red Hat Enterprise Linux 7 is distributed with several full-featured implementations of TLS . In this section, the configuration of OpenSSL and GnuTLS is described. See Section 4.13.3, "Configuring Specific Applications" for instructions on how to configure TLS support in individual applications. 
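For example, to follow the public key length recommendation above, you can generate a 3072-bit RSA key and a SHA-256-signed certificate request with OpenSSL . The file names are arbitrary, and the second pair of commands sketches the additional ECDSA key mentioned above for serving legacy and modern clients side by side:
# 3072-bit RSA key and a CSR signed with SHA-256.
$ openssl genrsa -out server-rsa.key 3072
$ openssl req -new -sha256 -key server-rsa.key -out server-rsa.csr
# Optional ECDSA key (NIST P-256) for clients that support ECDHE/ECDSA suites.
$ openssl ecparam -name prime256v1 -genkey -noout -out server-ecdsa.key
$ openssl req -new -sha256 -key server-ecdsa.key -out server-ecdsa.csr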
The available TLS implementations offer support for various cipher suites that define all the elements that come together when establishing and using TLS -secured communications. Use the tools included with the different implementations to list and specify cipher suites that provide the best possible security for your use case while considering the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . The resulting cipher suites can then be used to configure the way individual applications negotiate and secure connections. Important Be sure to check your settings following every update or upgrade of the TLS implementation you use or the applications that utilize that implementation. New versions may introduce new cipher suites that you do not want to have enabled and that your current configuration does not disable. 4.13.2.1. Working with Cipher Suites in OpenSSL OpenSSL is a toolkit and a cryptography library that support the SSL and TLS protocols. On Red Hat Enterprise Linux 7, a configuration file is provided at /etc/pki/tls/openssl.cnf . The format of this configuration file is described in config (1) . See also Section 4.7.9, "Configuring OpenSSL" . To get a list of all cipher suites supported by your installation of OpenSSL , use the openssl command with the ciphers subcommand as follows: Pass other parameters (referred to as cipher strings and keywords in OpenSSL documentation) to the ciphers subcommand to narrow the output. Special keywords can be used to only list suites that satisfy a certain condition. For example, to only list suites that are defined as belonging to the HIGH group, use the following command: See the ciphers (1) manual page for a list of available keywords and cipher strings. To obtain a list of cipher suites that satisfy the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command omits all insecure ciphers, gives preference to ephemeral elliptic curve Diffie-Hellman key exchange and ECDSA ciphers, and omits RSA key exchange (thus ensuring perfect forward secrecy ). Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients. 4.13.2.2. Working with Cipher Suites in GnuTLS GnuTLS is a communications library that implements the SSL and TLS protocols and related technologies. Note The GnuTLS installation on Red Hat Enterprise Linux 7 offers optimal default configuration values that provide sufficient security for the majority of use cases. Unless you need to satisfy special security requirements, it is recommended to use the supplied defaults. Use the gnutls-cli command with the -l (or --list ) option to list all supported cipher suites: To narrow the list of cipher suites displayed by the -l option, pass one or more parameters (referred to as priority strings and keywords in GnuTLS documentation) to the --priority option. See the GnuTLS documentation at http://www.gnutls.org/manual/gnutls.html#Priority-Strings for a list of all available priority strings. 
For example, issue the following command to get a list of cipher suites that offer at least 128 bits of security: To obtain a list of cipher suites that satisfy the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command limits the output to ciphers with at least 128 bits of security while giving preference to the stronger ones. It also forbids RSA key exchange and DSS authentication. Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients. 4.13.3. Configuring Specific Applications Different applications provide their own configuration mechanisms for TLS . This section describes the TLS -related configuration files employed by the most commonly used server applications and offers examples of typical configurations. Regardless of the configuration you choose to use, always make sure to mandate that your server application enforces server-side cipher order , so that the cipher suite to be used is determined by the order you configure. 4.13.3.1. Configuring the Apache HTTP Server The Apache HTTP Server can use both OpenSSL and NSS libraries for its TLS needs. Depending on your choice of the TLS library, you need to install either the mod_ssl or the mod_nss module (provided by eponymous packages). For example, to install the package that provides the OpenSSL mod_ssl module, issue the following command as root: The mod_ssl package installs the /etc/httpd/conf.d/ssl.conf configuration file, which can be used to modify the TLS -related settings of the Apache HTTP Server . Similarly, the mod_nss package installs the /etc/httpd/conf.d/nss.conf configuration file. Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server , including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file are described in detail in /usr/share/httpd/manual/mod/mod_ssl.html . Examples of various settings are in /usr/share/httpd/manual/ssl/ssl_howto.html . When modifying the settings in the /etc/httpd/conf.d/ssl.conf configuration file, be sure to consider the following three directives at the minimum: SSLProtocol Use this directive to specify the version of TLS (or SSL ) you want to allow. SSLCipherSuite Use this directive to specify your preferred cipher suite or disable the ones you want to disallow. SSLHonorCipherOrder Uncomment and set this directive to on to ensure that the connecting clients adhere to the order of ciphers you specified. For example: Note that the above configuration is the bare minimum, and it can be hardened significantly by following the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . To configure and use the mod_nss module, modify the /etc/httpd/conf.d/nss.conf configuration file. The mod_nss module is derived from mod_ssl , and as such it shares many features with it, not least the structure of the configuration file, and the directives that are available. Note that the mod_nss directives have a prefix of NSS instead of SSL . See https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html for an overview of information about mod_nss , including a list of mod_ssl configuration directives that are not applicable to mod_nss . 4.13.3.2. 
Configuring the Dovecot Mail Server To configure your installation of the Dovecot mail server to use TLS , modify the /etc/dovecot/conf.d/10-ssl.conf configuration file. You can find an explanation of some of the basic configuration directives available in that file in /usr/share/doc/dovecot-2.2.10/wiki/SSL.DovecotConfiguration.txt (this help file is installed along with the standard installation of Dovecot ). When modifying the settings in the /etc/dovecot/conf.d/10-ssl.conf configuration file, be sure to consider the following three directives at the minimum: ssl_protocols Use this directive to specify the version of TLS (or SSL ) you want to allow. ssl_cipher_list Use this directive to specify your preferred cipher suites or disable the ones you want to disallow. ssl_prefer_server_ciphers Uncomment and set this directive to yes to ensure that the connecting clients adhere to the order of ciphers you specified. For example: Note that the above configuration is the bare minimum, and it can be hardened significantly by following the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . 4.13.4. Additional Information For more information about TLS configuration and related topics, see the resources listed below. Installed Documentation config (1) - Describes the format of the /etc/ssl/openssl.conf configuration file. ciphers (1) - Includes a list of available OpenSSL keywords and cipher strings. /usr/share/httpd/manual/mod/mod_ssl.html - Contains detailed descriptions of the directives available in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/httpd/manual/ssl/ssl_howto.html - Contains practical examples of real-world settings in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/doc/dovecot-2.2.10/wiki/SSL.DovecotConfiguration.txt - Explains some of the basic configuration directives available in the /etc/dovecot/conf.d/10-ssl.conf configuration file used by the Dovecot mail server. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . http://tools.ietf.org/html/draft-ietf-uta-tls-bcp-00 - Recommendations for secure use of TLS and DTLS . See Also Section A.2.4, "SSL/TLS" provides a concise description of the SSL and TLS protocols. Section 4.7, "Using OpenSSL" describes, among other things, how to use OpenSSL to create and manage keys, generate certificates, and encrypt and decrypt files.
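After changing either the Apache HTTP Server or the Dovecot configuration files described above, it is worth confirming that the services picked up the hardened settings; the mail host below is a placeholder:
# Check the Apache HTTP Server configuration syntax before restarting the service.
$ apachectl configtest
# Show the non-default settings Dovecot is running with, including the ssl_* directives.
$ doveconf -n | grep '^ssl'
# Probe the IMAPS port and print the negotiated protocol and cipher.
$ openssl s_client -connect mail.example.com:993 < /dev/null 2> /dev/null | grep -E 'Protocol|Cipher'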
[ "~]USD openssl ciphers -v 'ALL:COMPLEMENTOFALL'", "~]USD openssl ciphers -v 'HIGH'", "~]USD openssl ciphers -v 'kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES' | column -t ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1", "~]USD gnutls-cli -l", "~]USD gnutls-cli --priority SECURE128 -l", "~]USD gnutls-cli --priority SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC -l Cipher suites for SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2 TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2 TLS_ECDHE_ECDSA_AES_256_CBC_SHA1 0xc0, 0x0a SSL3.0 TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2 TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2 TLS_ECDHE_ECDSA_AES_128_CBC_SHA1 0xc0, 0x09 SSL3.0 TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2 TLS_ECDHE_RSA_AES_256_CBC_SHA1 0xc0, 0x14 SSL3.0 TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2 TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2 TLS_ECDHE_RSA_AES_128_CBC_SHA1 0xc0, 0x13 SSL3.0 TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2 TLS_DHE_RSA_AES_256_CBC_SHA1 0x00, 0x39 SSL3.0 TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2 TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2 TLS_DHE_RSA_AES_128_CBC_SHA1 0x00, 0x33 SSL3.0 Certificate types: CTYPE-X.509 Protocols: VERS-TLS1.2 Compression: COMP-NULL Elliptic curves: CURVE-SECP384R1, CURVE-SECP521R1, CURVE-SECP256R1 PK-signatures: SIGN-RSA-SHA384, SIGN-ECDSA-SHA384, SIGN-RSA-SHA512, SIGN-ECDSA-SHA512, SIGN-RSA-SHA256, SIGN-DSA-SHA256, SIGN-ECDSA-SHA256", "~]# yum install mod_ssl", "SSLProtocol all -SSLv2 -SSLv3 SSLCipherSuite HIGH:!aNULL:!MD5 SSLHonorCipherOrder on", "ssl_protocols = !SSLv2 !SSLv3 ssl_cipher_list = HIGH:!aNULL:!MD5 ssl_prefer_server_ciphers = yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-hardening_tls_configuration
Chapter 18. Upgrading a standard overcloud
Chapter 18. Upgrading a standard overcloud This scenario contains an example upgrade process for a standard overcloud environment, which includes the following node types: Three Controller nodes Three Ceph Storage nodes Multiple Compute nodes 18.1. Running the overcloud upgrade preparation The upgrade requires running openstack overcloud upgrade prepare command, which performs the following tasks: Updates the overcloud plan to OpenStack Platform 16.2 Prepares the nodes for the upgrade Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Run the upgrade preparation command: Include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( rhsm.yaml ) with the registration and subscription parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( neutron-ovs.yaml ) to maintain OVS compatibility. Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file using --roles-file . If applicable, your composable network ( network_data ) file using --networks-file . If you use a custom stack name, pass the name with the --stack option. Wait until the upgrade preparation completes. Download the container images: 18.2. Upgrading Controller nodes To upgrade all the Controller nodes to Red Hat OpenStack Platform (RHOSP) 16.2, you must upgrade each Controller node starting with the bootstrap Controller node. If your deployment uses a Red Hat Ceph Storage cluster that was deployed using director, follow the procedure in Upgrading Controller nodes with director-deployed Ceph Storage . During the bootstrap Controller node upgrade process, a new Pacemaker cluster is created and new RHOSP 16.2 containers are started on the node, while the remaining Controller nodes are still running on RHOSP 13. After upgrading the bootstrap node, you must upgrade each additional node with Pacemaker services and ensure that each node joins the new Pacemaker cluster started with the bootstrap node. For more information, see Overcloud node upgrade workflow . Procedure Source the stackrc file: On the undercloud node, identify the bootstrap Controller node: Replace <stack_name> with the name of your stack. Upgrade the bootstrap Controller node: Perform a Leapp upgrade of the operating system on the bootstrap Controller node: Replace <bootstrap_controller_node> with the host name of the bootstrap Controller node in your environment, for example, overcloud-controller-0 . If you are not using the default overcloud stack name, overcloud , include the --stack optional argument and replace <stack> with the name of your overcloud stack. The bootstrap Controller node is rebooted as part of the Leapp upgrade. Copy the latest version of the database from an existing node to the bootstrap node: Important This command causes an outage on the control plane. You cannot perform any standard operations on the overcloud until the RHOSP upgrade is complete and the control plane is active again. 
Launch temporary 16.2 containers on Compute nodes to help facilitate workload migration when you upgrade Compute nodes at a later step: Upgrade the overcloud with no tags: Verify that after the upgrade, the new Pacemaker cluster is started and that the control plane services such as galera , rabbit , haproxy , and redis are running: Upgrade the Controller node: Verify that the old cluster is no longer running: An error similar to the following is displayed when the cluster is not running: Perform a Leapp upgrade of the operating system on the Controller node: Replace <controller_node> with the host name of the Controller node to upgrade, for example, overcloud-controller-1 . The Controller node is rebooted as a part of the Leapp upgrade. Upgrade the Controller node, adding it to the previously upgraded nodes in the new Pacemaker cluster: Replace <bootstrap_controller_node,controller_node_1,controller_node_n> with a comma-separated list of the Controller nodes that you have upgraded so far, and the additional Controller node that you want to add to the Pacemaker cluster, for example, overcloud-controller-0,overcloud-controller-1, overcloud-controller-2 . 18.3. Upgrading Controller nodes with director-deployed Ceph Storage If your deployment uses a Red Hat Ceph Storage cluster that was deployed using director, you must complete this procedure. To upgrade all the Controller nodes to OpenStack Platform 16.2, you must upgrade each Controller node starting with the bootstrap Controller node. During the bootstrap Controller node upgrade process, a new Pacemaker cluster is created and new Red Hat OpenStack 16.2 containers are started on the node, while the remaining Controller nodes are still running on Red Hat OpenStack 13. After upgrading the bootstrap node, you must upgrade each additional node with Pacemaker services and ensure that each node joins the new Pacemaker cluster started with the bootstrap node. For more information, see Overcloud node upgrade workflow . In this example, the controller nodes are named using the default overcloud-controller- NODEID convention. This includes the following three controller nodes: overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 Substitute these values for your own node names where applicable. Procedure Source the stackrc file: Identify the bootstrap Controller node by running the following command on the undercloud node: Optional: Replace <stack_name> with the name of the stack. If not specified, the default is overcloud . Upgrade the bootstrap Controller node: Run the external upgrade command with the ceph_systemd tag: Replace <stack_name> with the name of your stack. This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected Controller node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for The leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Important The command causes an outage on the control plane. You cannot perform any standard operations on the overcloud during the few steps. Run the external upgrade command with the system_upgrade_transfer_data tag: This command copies the latest version of the database from an existing node to the bootstrap node. 
Run the upgrade command with the nova_hybrid_state tag and run only the upgrade_steps_playbook.yaml playbook: This command launches temporary 16.2 containers on Compute nodes to help facilitate workload migration when you upgrade Compute nodes at a later step. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. Important The control plane becomes active when this command finishes. You can perform standard operations on the overcloud again. Verify that after the upgrade, the new Pacemaker cluster is started and that the control plane services such as galera, rabbit, haproxy, and redis are running: Upgrade the Controller node: Verify that the old cluster is no longer running: An error similar to the following is displayed when the cluster is not running: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected Controller node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for The leapp upgrade. Run the upgrade command with the system_upgrade tag on the Controller node: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. In addition to this node, include the previously upgraded bootstrap node in the --limit option. Upgrade the final Controller node: Verify that the old cluster is no longer running: An error similar to the following is displayed when the cluster is not running: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected Controller node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for The leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. Include all Controller nodes in the --limit option. 18.4. Upgrading the operating system for Ceph Storage nodes If your deployment uses a Red Hat Ceph Storage cluster that was deployed using director, you must upgrade the operating system for each Ceph Storage nodes. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Select a Ceph Storage node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for The leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. 
Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph Storage node. This step does not upgrade the Ceph Storage nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. Select the Ceph Storage node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for The leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph Storage node. This step does not upgrade the Ceph Storage nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. Select the final Ceph Storage node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for The leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph Storage node. This step does not upgrade the Ceph Storage nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. 18.5. Upgrading Compute nodes Upgrade all the Compute nodes to OpenStack Platform 16.2. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Migrate your instances. For more information on migration strategies, see Migrating virtual machines between Compute nodes . Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. To upgrade multiple Compute nodes in parallel, set the --limit option to a comma-separated list of nodes that you want to upgrade. First perform the system_upgrade task: Then perform the standard OpenStack service upgrade: 18.6. Synchronizing the overcloud stack The upgrade requires an update the overcloud stack to ensure that the stack resource structure and parameters align with a fresh deployment of OpenStack Platform 16.2. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. 
Procedure Source the stackrc file: Edit the containers-prepare-parameter.yaml file and remove the following parameters and their values: ceph3_namespace ceph3_tag ceph3_image name_prefix_stein name_suffix_stein namespace_stein tag_stein To re-enable fencing in your overcloud, set the EnableFencing parameter to true in the fencing.yaml environment file. Run the upgrade finalization command: Include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( fencing.yaml ) with the EnableFencing parameter set to true . The environment file ( rhsm.yaml ) with the registration and subscription parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( neutron-ovs.yaml ) to maintain OVS compatibility. Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file using --roles-file . If applicable, your composable network ( network_data ) file using --networks-file . If you use a custom stack name, pass the name with the --stack option. Wait until the stack synchronization completes. Important You do not need the upgrades-environment.yaml file for any further deployment operations.
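As an informal post-upgrade check, not a required step in this procedure, you can confirm that the Compute services report as up and that the nodes now run the upgraded operating system. The commands assume the default overcloud credential files and the heat-admin user created by director:
$ source ~/overcloudrc
$ openstack compute service list
$ source ~/stackrc
# Use the node addresses from "openstack server list" for <node_ip>.
$ ssh heat-admin@<node_ip> cat /etc/redhat-release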
[ "source ~/stackrc", "openstack overcloud upgrade prepare --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/templates/rhsm.yaml -e /home/stack/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml ...", "openstack overcloud external-upgrade run --stack STACK NAME --tags container_image_prepare", "source ~/stackrc", "tripleo-ansible-inventory --list [--stack <stack_name>] |jq .overcloud_Controller.hosts[0]", "openstack overcloud upgrade run [--stack <stack>] --tags system_upgrade --limit <bootstrap_controller_node>", "openstack overcloud external-upgrade run [--stack <stack>] --tags system_upgrade_transfer_data", "openstack overcloud upgrade run --stack <stack> --playbook upgrade_steps_playbook.yaml --tags nova_hybrid_state --limit all", "openstack overcloud upgrade run --stack <stack> --limit <bootstrap_controller_node>", "sudo pcs status", "sudo pcs status", "Error: cluster is not currently running on this node", "openstack overcloud upgrade run --stack <stack> --tags system_upgrade --limit <controller_node>", "openstack overcloud upgrade run --stack <stack> --limit <bootstrap_controller_node,controller_node_1,controller_node_n>", "source ~/stackrc", "tripleo-ansible-inventory --list [--stack <stack_name>] |jq .overcloud_Controller.hosts[0]", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-0", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-0", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags system_upgrade_transfer_data", "openstack overcloud upgrade run [--stack <stack_name>] --playbook upgrade_steps_playbook.yaml --tags nova_hybrid_state --limit all", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0", "sudo pcs status", "sudo pcs status", "Error: cluster is not currently running on this node", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-1", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-1", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0,overcloud-controller-1", "sudo pcs status", "Error: cluster is not currently running on this node", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-2", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-2", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0,overcloud-controller-1,overcloud-controller-2", "source ~/stackrc", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephstorage-0", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephstorage-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephstorage-0", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephstorage-1", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephstorage-1", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephstorage-1", "openstack 
overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephstorage-2", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephstorage-2", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephstorage-2", "source ~/stackrc", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-compute-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-compute-0", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2", "source ~/stackrc", "openstack overcloud upgrade converge --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/templates/rhsm.yaml -e /home/stack/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml ..." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/upgrading-a-standard-overcloud_upgrading-overcloud-standard
31.2. Test Environment Preparations
31.2. Test Environment Preparations Before evaluating VDO, it is important to consider the host system configuration, VDO configuration, and the workloads that will be used during testing. These choices will affect benchmarking both in terms of data optimization (space efficiency) and performance (bandwidth and latency). Items that should be considered when developing test plans are listed in the following sections. 31.2.1. System Configuration Number and type of CPU cores available. This can be controlled by using the taskset utility. Available memory and total installed memory. Configuration of storage devices. Linux kernel version. Note that Red Hat Enterprise Linux 7 provides only one Linux kernel version. Packages installed. 31.2.2. VDO Configuration Partitioning scheme File system(s) used on VDO volumes Size of the physical storage assigned to a VDO volume Size of the logical VDO volume created Sparse or dense indexing UDS Index in memory size VDO's thread configuration 31.2.3. Workloads Types of tools used to generate test data Number of concurrent clients The quantity of duplicate 4 KB blocks in the written data Read and write patterns The working set size VDO volumes may need to be re-created in between certain tests to ensure that each test is performed on the same disk environment. Read more about this in the testing section. 31.2.4. Supported System Configurations Red Hat has tested VDO with Red Hat Enterprise Linux 7 on the Intel 64 architecture. For the system requirements of VDO, see Section 30.2, "System Requirements" . The following utilities are recommended when evaluating VDO: Flexible I/O Tester version 2.08 or higher; available from the fio package sysstat version 8.1.2-2 or higher; available from the sysstat package 31.2.5. Pre-Test System Preparations This section describes how to configure system settings to achieve optimal performance during the evaluation. Testing beyond the implicit bounds established in any particular test may result in loss of testing time due to abnormal results. For example, this guide describes a test that conducts random reads over a 100 GB address range. To test a working set of 500 GB, the amount of DRAM allocated for the VDO block map cache should be increased accordingly. System Configuration Ensure that your CPU is running at its highest performance setting. Disable frequency scaling if possible using the BIOS configuration or the Linux cpupower utility. Enable Turbo mode if possible to achieve maximum throughput. Turbo mode introduces some variability in test results, but performance will meet or exceed that of testing without Turbo. Linux Configuration For disk-based solutions, Linux offers several I/O scheduler algorithms to handle multiple read/write requests as they are queued. By default, Red Hat Enterprise Linux uses the CFQ (completely fair queuing) scheduler, which arranges requests in a way that improves rotational disk (hard disk) access in many situations. We instead suggest using the Deadline scheduler for rotational disks, having found that it provides better throughput and latency in Red Hat lab testing. Change the device settings as follows: For flash-based solutions, the noop scheduler demonstrates superior random access throughput and latency in Red Hat lab testing. Change the device settings as follows: Storage device configuration File systems (ext4, XFS, etc.) may have unique impacts on performance; they often skew performance measurements, making it harder to isolate VDO's impact on the results. 
If reasonable, we recommend measuring performance on the raw block device. If this is not possible, format the device using the file system that would be used in the target implementation. 31.2.6. VDO Internal Structures We believe that a general understanding of VDO mechanisms is essential for a complete and successful evaluation. This understanding becomes especially important when testers wish to deviate from the test plan or devise new stimuli to emulate a particular application or use case. For more information, see Chapter 30, VDO Integration . The Red Hat test plan was written to operate with a default VDO configuration. When developing new tests, some of the VDO parameters listed in the section must be adjusted. 31.2.7. VDO Optimizations High Load Perhaps the most important strategy for producing optimal performance is determining the best I/O queue depth, a characteristic that represents the load on the storage system. Most modern storage systems perform optimally with high I/O depth. VDO's performance is best demonstrated with many concurrent requests. Synchronous vs. Asynchronous Write Policy VDO might operate with either of two write policies, synchronous or asynchronous. By default, VDO automatically chooses the appropriate write policy for your underlying storage device. When testing performance, you need to know which write policy VDO selected. The following command shows the write policy of your VDO volume: For more information on write policies, see the section called "Overview of VDO Write Policies" and Section 30.4.2, "Selecting VDO Write Modes" . Metadata Caching VDO maintains a table of mappings from logical block addresses to physical block addresses, and VDO must look up the relevant mapping when accessing any particular block. By default, VDO allocates 128 MB of metadata cache in DRAM to support efficient access to 100 GB of logical space at a time. The test plan generates workloads appropriate to this configuration option. Working sets larger than the configured cache size will require additional I/Os to service requests, in which case performance degradation will occur. If additional memory is available, the block map cache should be made larger. If the working set is larger than what the block map cache can hold in memory, additional I/O overhead can occur to look up associated block map pages. VDO Multithreading Configuration VDO's thread configuration must be tuned to achieve optimal performance. Review the VDO Integration Guide for information on how to modify these settings when creating a VDO volume. Contact your Red Hat Sales Engineer to discuss how to design a test to find the optimal setting. Data Content Because VDO performs deduplication and compression, test data sets must be chosen to effectively exercise these capabilities. 31.2.8. Special Considerations for Testing Read Performance When testing read performance, these factors must be considered: If a 4 KB block has never been written , VDO will not perform I/O to the storage and will immediately respond with a zero block. If a 4 KB block has been written but contains all zeros , VDO will not perform I/O to the storage and will immediately respond with a zero block. This behavior results in very fast read performance when there is no data to read. This makes it imperative that read tests prefill with actual data. 31.2.9. Cross Talk To prevent one test from affecting the results of another, it is suggested that a new VDO volume be created for each iteration of each test.
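As an illustration of the kind of read stimulus described above, the following fio job issues 4 KB random reads at a high queue depth across a 100 GB range of a VDO volume (four jobs of 25 GB each). The device name, sizes, and thread counts are examples only, and the target range must be prefilled with real data first, as noted in Section 31.2.8:
$ fio --name=vdo-randread --filename=/dev/mapper/my_vdo \
      --ioengine=libaio --direct=1 --rw=randread --bs=4k \
      --iodepth=128 --numjobs=4 --size=25g --offset_increment=25g \
      --group_reporting --time_based --runtime=300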
[ "echo \"deadline\" > /sys/block/ device /queue/scheduler", "echo \"noop\" > /sys/block/ device /queue/scheduler", "vdo status --name= my_vdo" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-ev-test-environment-prep
2.2.7.4. Disable Postfix Network Listening
2.2.7.4. Disable Postfix Network Listening By default, Postfix is set up to only listen to the local loopback address. You can verify this by viewing the file /etc/postfix/main.cf and ensuring that only the following inet_interfaces line appears: This ensures that Postfix only accepts mail messages (such as cron job reports) from the local system and not from the network. This is the default setting and protects Postfix from a network attack. To remove the localhost restriction and allow Postfix to listen on all interfaces, use the inet_interfaces = all setting.
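As a quick check, the effective value can also be queried with the postconf utility. The commands below are an illustrative sketch and assume a standard Postfix installation managed by the postfix service.
# Print the effective setting; local-only delivery shows inet_interfaces = localhost.
postconf inet_interfaces
# If you later change main.cf to inet_interfaces = all, reload Postfix so that
# the new setting takes effect.
service postfix reload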
[ "inet_interfaces = localhost" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-disable_postfix_network_listening
Chapter 5. Triggers
Chapter 5. Triggers 5.1. Triggers overview Triggers are essential components in Knative Eventing that connect specific event sources to subscriber services based on defined filters. By creating a Trigger, you can dynamically manage how events are routed within your system, ensuring they reach the appropriate destination based on your business logic. Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. 5.2. Creating triggers Triggers in Knative Eventing allow you to route events from a broker to a specific subscriber based on your requirements. By defining a Trigger, you can connect event producers to consumers dynamically, ensuring events are delivered to the correct destination. This section describes the steps to create a Trigger, configure its filters, and verify its functionality, whether you're working with simple routing needs or complex event-driven workflows. The following examples display common configurations for Triggers, demonstrating how to route events to Knative services or custom endpoints. Example of routing events to a Knative Serving service The following Trigger routes events with the type attribute dev.knative.foo.bar from the default broker to the Knative Serving service named my-service : apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filter: attributes: type: dev.knative.foo.bar subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service Note Routing all events without a filter attribute is recommended for debugging purposes. It allows you to observe and analyze all incoming events, helping identify issues or validate the flow of events through the broker before applying specific filters. To learn more about filtering, see Advanced trigger filters . To apply this trigger, you can save the configuration to a file, for example, trigger.yaml , and run the following command: USD oc apply -f trigger.yaml Example of routing events to a custom path This Trigger routes all events from the default broker to a custom path /my-custom-path on the service named my-service : apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default subscriber: ref: apiVersion: v1 kind: Service name: my-service uri: /my-custom-path You can save the configuration to a file, for example, custom-path-trigger.yaml , and run the following command: USD oc apply -f custom-path-trigger.yaml 5.2.1. Creating a trigger by using the Administrator perspective Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a Knative broker. You have created a Knative service to use as a subscriber.
Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Broker tab, select the Options menu for the broker that you want to add a trigger to. Click Add Trigger in the list. In the Add Trigger dialogue box, select a Subscriber for the trigger. The subscriber is the Knative service that will receive events from the broker. Click Add . 5.2.2. Creating a trigger by using the Developer perspective Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a broker and a Knative service or other event sink to connect to the trigger. Procedure In the Developer perspective, navigate to the Topology page. Hover over the broker that you want to create a trigger for, and drag the arrow. The Add Trigger option is displayed. Click Add Trigger . Select your sink in the Subscriber list. Click Add . Verification After the subscription has been created, you can view it in the Topology page, where it is represented as a line that connects the broker to the event sink. Deleting a trigger In the Developer perspective, navigate to the Topology page. Click on the trigger that you want to delete. In the Actions context menu, select Delete Trigger . 5.2.3. Creating a trigger by using the Knative CLI You can use the kn trigger create command to create a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a trigger: USD kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name> Alternatively, you can create a trigger and simultaneously create the default broker using broker injection: USD kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name> By default, triggers forward all events sent to a broker to sinks that are subscribed to that broker. Using the --filter attribute for triggers allows you to filter events from a broker, so that subscribers will only receive a subset of events based on your defined criteria. 5.3. List triggers from the command line Using the Knative ( kn ) CLI to list triggers provides a streamlined and intuitive user interface. 5.3.1. Listing triggers by using the Knative CLI You can use the kn trigger list command to list existing triggers in your cluster. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. 
Procedure Print a list of available triggers: USD kn trigger list Example output NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True Optional: Print a list of triggers in JSON format: USD kn trigger list -o json 5.4. Describe triggers from the command line Using the Knative ( kn ) CLI to describe triggers provides a streamlined and intuitive user interface. 5.4.1. Describing a trigger by using the Knative CLI You can use the kn trigger describe command to print information about existing triggers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a trigger. Procedure Enter the command: USD kn trigger describe <trigger_name> Example output Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m 5.5. Connecting a trigger to a sink You can connect a trigger to a sink, so that events from a broker are filtered before they are sent to the sink. A sink that is connected to a trigger is configured as a subscriber in the Trigger object's resource spec. Example of a Trigger object connected to an Apache Kafka sink apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: ... subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2 1 The name of the trigger being connected to the sink. 2 The name of a KafkaSink object. 5.6. Filtering triggers from the command line Using the Knative ( kn ) CLI to filter events by using triggers provides a streamlined and intuitive user interface. You can use the kn trigger create command, along with the appropriate flags, to filter events by using triggers. 5.6.1. Filtering events with triggers by using the Knative CLI In the following trigger example, only events with the attribute type: dev.knative.samples.helloworld are sent to the event sink: USD kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name> You can also filter events by using multiple attributes. The following example shows how to filter events using the type, source, and extension attributes: USD kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> \ --filter type=dev.knative.samples.helloworld \ --filter source=dev.knative.samples/helloworldsource \ --filter myextension=my-extension-value 5.7. Advanced trigger filters Advanced trigger filters give you additional options for more precise event routing. You can filter events by exact matches, prefixes, or suffixes, as well as by CloudEvent extensions. This added control makes it easier to fine-tune how events flow, ensuring that only relevant events trigger specific actions. Important The advanced trigger filters feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.7.1. Advanced trigger filters overview The advanced trigger filters feature adds a new filters field to triggers that aligns with the filters API field defined in the CloudEvents Subscriptions API. You can specify filter expressions, where each expression evaluates to true or false for each event. The following example shows a trigger using the advanced filters field: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filters: - cesql: "source LIKE '%commerce%' AND type IN ('order.created', 'order.updated', 'order.canceled')" subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service The filters field contains an array of filter expressions, each evaluating to either true or false . If any expression evaluates to false , the event is not sent to the subscriber. Each filter expression uses a specific dialect that determines the type of filter and the set of allowed additional properties within the expression. 5.7.2. Supported filter dialects You can use dialects to define flexible filter expressions to target specific events. The advanced trigger filters support the following dialects that offer different ways to match and filter events: exact prefix suffix all any not cesql Each dialect provides a different method for filtering events based on specific criteria, enabling precise event selection for processing. 5.7.2.1. exact filter dialect The exact dialect filters events by checking whether the string value of a CloudEvent attribute exactly matches the specified string. The comparison is case-sensitive. If the attribute is not a string, the filter converts the attribute to its string representation before comparing it to the specified value. Example of the exact filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... filters: - exact: type: com.github.push 5.7.2.2. prefix filter dialect The prefix dialect filters events by checking whether the string value of a CloudEvent attribute starts with the specified string. This comparison is case-sensitive. If the attribute is not a string, the filter converts the attribute to its string representation before matching it against the specified value. Example of the prefix filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... filters: - prefix: type: com.github. 5.7.2.3. suffix filter dialect The suffix dialect filters events by checking whether the string value of a CloudEvent attribute ends with the specified string. This comparison is case-sensitive. If the attribute is not a string, the filter converts the attribute to its string representation before matching it to the specified value. Example of the suffix filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... filters: - suffix: type: .created 5.7.2.4. all filter dialect The all filter dialect requires that all nested filter expressions evaluate to true for the event to be processed. If any of the nested expressions return false , the event is not sent to the subscriber. Example of the all filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... 
filters: - all: - exact: type: com.github.push - exact: subject: https://github.com/cloudevents/spec 5.7.2.5. any filter dialect The any filter dialect requires at least one of the nested filter expressions to evaluate to true . If none of the nested expressions return true , the event is not sent to the subscriber. Example of the any filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... filters: - any: - exact: type: com.github.push - exact: subject: https://github.com/cloudevents/spec 5.7.2.6. not filter dialect The not filter dialect requires that the nested filter expression evaluates to false for the event to be processed. If the nested expression evaluates to true , the event is not sent to the subscriber. Example of the not filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... filters: - not: exact: type: com.github.push 5.7.2.7. cesql filter dialect CloudEvents SQL expressions (cesql) allow computing values and matching of CloudEvent attributes against complex expressions that lean on the syntax of Structured Query Language (SQL) WHERE clauses. The cesql filter dialect uses CloudEvents SQL expressions to filter events. The provided CESQL expression must evaluate to true for the event to be processed. Example of the cesql filter dialect apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: ... spec: ... filters: - cesql: "source LIKE '%commerce%' AND type IN ('order.created', 'order.updated', 'order.canceled')" For more information about the syntax and the features of the cesql filter dialect, see CloudEvents SQL Expression Language . 5.7.3. Conflict with the existing filter field You can use the new filters field and the existing filter field at the same time. If you enable the new-trigger-filters feature and an object contains both filter and filters , the filters field takes precedence. This setup allows you to test the new filters field while maintaining support for existing filters. You can gradually introduce the new field into existing trigger objects. Example of the filters field overriding the filter field: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default # Existing filter field. This will be ignored when the new filters field is present. filter: attributes: type: dev.knative.foo.bar myextension: my-extension-value # New filters field. This takes precedence over the old filter field. filters: - cesql: "type = 'dev.knative.foo.bar' AND myextension = 'my-extension-value'" subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service 5.7.4. Legacy attributes filter The legacy attributes filter enables exact match filtering on any number of CloudEvents attributes, including extensions. Its functionality mirrors the exact filter dialect, and you are encouraged to transition to the exact filter whenever possible. However, for backward compatibility, the attributes filter remains available.
The following example displays how to filter events from the default broker that match the type attribute dev.knative.foo.bar and have the extension myextension with the my-extension-value value: Example of filtering events with specific attributes apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filter: attributes: type: dev.knative.foo.bar myextension: my-extension-value subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service When both the filters field and the legacy filter field are specified, the filters field takes precedence. For example, in the following example configuration, events with the dev.knative.a type are delivered, while events with the dev.knative.b type are ignored: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filters: exact: type: dev.knative.a filter: attributes: type: dev.knative.b subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service 5.8. Updating triggers from the command line Using the Knative ( kn ) CLI to update triggers provides a streamlined and intuitive user interface. 5.8.1. Updating a trigger by using the Knative CLI You can use the kn trigger update command with certain flags to update attributes for a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Update a trigger: USD kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags] You can update a trigger to filter exact event attributes that match incoming events. For example, using the type attribute: USD kn trigger update <trigger_name> --filter type=knative.dev.event You can remove a filter attribute from a trigger. For example, you can remove the filter attribute with key type : USD kn trigger update <trigger_name> --filter type- You can use the --sink parameter to change the event sink of a trigger: USD kn trigger update <trigger_name> --sink ksvc:my-event-sink 5.9. Deleting triggers from the command line Using the Knative ( kn ) CLI to delete a trigger provides a streamlined and intuitive user interface. 5.9.1. Deleting a trigger by using the Knative CLI You can use the kn trigger delete command to delete a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Delete a trigger: USD kn trigger delete <trigger_name> Verification List existing triggers: USD kn trigger list Verify that the trigger no longer exists: Example output No triggers found. 5.10. Event delivery order for triggers In Knative Eventing, the delivery order of events plays a critical role in ensuring messages are processed according to application requirements. When using a Kafka broker, you can specify whether events should be delivered in order or without strict ordering. 
By configuring the delivery order, you can optimize event handling for use cases that require sequential processing or prioritize performance for unordered delivery. 5.10.1. Configuring event delivery ordering for triggers If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster. The Kafka broker is enabled for use on your cluster, and you have created a Kafka broker. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift ( oc ) CLI. Procedure Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation using the following example Trigger YAML file: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered # ... The supported consumer delivery guarantees are: unordered An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management. ordered An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition. The default ordering guarantee is unordered . Apply the Trigger object using the following command: USD oc apply -f <filename> 5.10.2. Next steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink.
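As a usage sketch, the pieces above can be combined into a single Trigger that both filters on a CloudEvent type and requests ordered delivery from a Kafka broker. The trigger name orders-trigger and the subscriber service my-service are placeholder names, and the sketch assumes the default broker already exists in the current namespace.
cat <<EOF | oc apply -f -
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger
  annotations:
    kafka.eventing.knative.dev/delivery.order: ordered
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.foo.bar
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
EOF
# Verify that the trigger becomes ready and that the filter was applied.
kn trigger describe orders-trigger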
[ "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filter: attributes: type: dev.knative.foo.bar subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service", "oc apply -f trigger.yaml", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default subscriber: ref: apiVersion: v1 kind: Service name: my-service uri: /my-custom-path", "oc apply -f custom-path-trigger.yaml", "kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>", "kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>", "kn trigger list", "NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True", "kn trigger list -o json", "kn trigger describe <trigger_name>", "Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2", "kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>", "kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> --filter type=dev.knative.samples.helloworld --filter source=dev.knative.samples/helloworldsource --filter myextension=my-extension-value", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filters: - cesql: \"source LIKE '%commerce%' AND type IN ('order.created', 'order.updated', 'order.canceled')\" subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - exact: type: com.github.push", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - prefix: type: com.github.", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - suffix: type: .created", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - all: - exact: type: com.github.push - exact: subject: https://github.com/cloudevents/spec", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - any: - exact: type: com.github.push - exact: subject: https://github.com/cloudevents/spec", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - not: exact: type: com.github.push", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: filters: - cesql: \"source LIKE '%commerce%' AND type IN ('order.created', 'order.updated', 'order.canceled')\"", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default # Existing filter field. This will be ignored when the new filters field is present. filter: attributes: type: dev.knative.foo.bar myextension: my-extension-value # New filters field. This takes precedence over the old filter field. 
filters: - cesql: \"type = 'dev.knative.foo.bar' AND myextension = 'my-extension-value'\" subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filter: attributes: type: dev.knative.foo.bar myextension: my-extension-value subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: my-service-trigger spec: broker: default filters: exact: type: dev.knative.a filter: attributes: type: dev.knative.b subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: my-service", "kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]", "kn trigger update <trigger_name> --filter type=knative.dev.event", "kn trigger update <trigger_name> --filter type-", "kn trigger update <trigger_name> --sink ksvc:my-event-sink", "kn trigger delete <trigger_name>", "kn trigger list", "No triggers found.", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered", "oc apply -f <filename>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/eventing/triggers
Chapter 1. Authorization APIs
Chapter 1. Authorization APIs 1.1. LocalResourceAccessReview [authorization.openshift.io/v1] Description LocalResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. SubjectRulesReview [authorization.openshift.io/v1] Description SubjectRulesReview is a resource you can create to determine which actions another user can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object 1.8. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object 1.9. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object 1.10. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object 1.11. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. 
It should NOT be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview and LocalSubjectAccessReview are the correct way to defer authorization decisions to the API server. Type object 1.12. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object
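As an illustrative sketch of how these review resources are typically exercised, the oc auth can-i command submits a SelfSubjectAccessReview on your behalf, and its --list form uses a SelfSubjectRulesReview; this mirrors the upstream kubectl behavior and is shown here as an assumption, with the namespace and verbs chosen arbitrarily.
# Check whether the current user can create pods in the "default" namespace.
oc auth can-i create pods --namespace default
# List the actions the current user can perform in that namespace.
oc auth can-i --list --namespace default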
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/authorization-apis
Chapter 1. Long Term Support for AMQ Broker 7.8
Chapter 1. Long Term Support for AMQ Broker 7.8 AMQ Broker 7.8 has been designated as a Long Term Support (LTS) release version. Bug fixes and security advisories will be made available for AMQ Broker 7.8 in a series of micro releases (7.8.1, 7.8.2, and so on) for a period of at least 12 months. This means that you will be able to get recent bug fixes and security advisories for AMQ Broker without having to upgrade to a new minor release. Note the following important points about the LTS release stream: The LTS release stream provides only bug fixes. No new enhancements will be added to this stream. To remain in a supported configuration, you must upgrade to the latest micro release in the LTS release stream. The LTS version will be supported for at least 12 months from the time of the AMQ Broker 7.8.0 GA release. Support for Red Hat Enterprise Linux and OpenShift Container Platform The AMQ Broker 7.8 LTS version supports: Red Hat Enterprise Linux 6, 7, and 8 OpenShift Container Platform 3.11, 4.5, and 4.6 Note the following important points about support for Red Hat Enterprise Linux and OpenShift Container Platform: AMQ Broker 7.8 is the last version that will support Red Hat Enterprise Linux 6 and OpenShift Container Platform 3.11. Red Hat does not guarantee that AMQ Broker 7.8 will be supported on future versions (that is, versions greater than 4.6) of OpenShift Container Platform. For information about issues resolved in AMQ Broker 7.8 LTS micro releases, see AMQ 7 Broker - 7.8.x Resolved Issues .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_red_hat_amq_broker_7.8/lts_releases