Chapter 1. OpenShift Container Platform 4.16 release notes Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements. 1.1. About this release OpenShift Container Platform ( RHSA-2024:0041 ) is now available. This release uses Kubernetes 1.29 with CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.16 are included in this topic. OpenShift Container Platform 4.16 clusters are available at https://console.redhat.com/openshift . With the Red Hat OpenShift Cluster Manager application for OpenShift Container Platform, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments. OpenShift Container Platform 4.16 is supported on Red Hat Enterprise Linux (RHEL) 8.8 and a later version of RHEL 8 that is released before End of Life of OpenShift Container Platform 4.16. OpenShift Container Platform 4.16 is also supported on Red Hat Enterprise Linux CoreOS (RHCOS) 4.16. To understand RHEL versions used by RHCOS, see RHEL Versions Utilized by Red Hat Enterprise Linux CoreOS (RHCOS) and OpenShift Container Platform (Knowledgebase article). You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines. RHEL machines are deprecated in OpenShift Container Platform 4.16 and will be removed in a future release. Starting from OpenShift Container Platform 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including x86_64 , 64-bit ARM ( aarch64 ), IBM Power(R) ( ppc64le ), and IBM Z(R) ( s390x ) architectures. Beyond this, Red Hat also offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2 , that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about support for all versions, see the Red Hat OpenShift Container Platform Life Cycle Policy . Commencing with the 4.16 release, Red Hat is simplifying the administration and management of Red Hat shipped cluster Operators with the introduction of three new life cycle classifications; Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see OpenShift Operator Life Cycles . OpenShift Container Platform is designed for FIPS. 
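As a point of reference, FIPS mode is requested at installation time through the install-config.yaml file. The following is a minimal sketch of that setting; every value other than fips: true is an illustrative placeholder:

apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
fips: true
pullSecret: '<pull_secret>'
sshKey: '<ssh_key>'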
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . 1.2. OpenShift Container Platform layered and dependent component support and compatibility The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . 1.3. New features and enhancements This release adds improvements related to the following components and concepts. 1.3.1. Red Hat Enterprise Linux CoreOS (RHCOS) 1.3.1.1. RHCOS now uses RHEL 9.4 RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.4 packages in OpenShift Container Platform 4.16. These packages ensure that your OpenShift Container Platform instances receive the latest fixes, features, enhancements, hardware support, and driver updates. As an Extended Update Support (EUS) release, OpenShift Container Platform 4.14 is excluded from this change and will continue to use RHEL 9.2 EUS packages for the entirety of its lifecycle. 1.3.1.2. Support for iSCSI boot volumes If your cluster uses user-provisioned infrastructure, you can now install RHCOS to Small Computer Systems Interface (iSCSI) boot devices. Multipathing for iSCSI is also supported. For more information, see Installing RHCOS manually on an iSCSI boot device and Installing RHCOS on an iSCSI boot device using iBFT 1.3.1.3. Support for RAID storage using Intel(R) Virtual RAID on CPU (VROC) With this release, you can now install RHCOS to Intel(R) VROC RAID devices. For more information about configuring RAID to an Intel(R) VROC device, see Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume . 1.3.2. Installation and update 1.3.2.1. Cluster API replaces Terraform for AWS installations In OpenShift Container Platform 4.16, the installation program uses Cluster API instead of Terraform to provision cluster infrastructure during installations on Amazon Web Services. There are several additional required permissions as a result of this change. For more information, see Required AWS permissions for the IAM user . Additionally, SSH access to control plane and compute machines is no longer open to the machine network, but is restricted to the security groups associated with the control plane and compute machines. Warning Installing a cluster on Amazon Web Services (AWS) into a secret or top-secret region by using the Cluster API implementation has not been tested as of the release of OpenShift Container Platform 4.16. This document will be updated when installation into a secret region has been tested. There is a known issue with Network Load Balancer support for security groups in secret or top secret regions that causes installations to fail. For more information, see OCPBUGS-33311 . 1.3.2.2. 
Cluster API replaces Terraform for VMware vSphere installations In OpenShift Container Platform 4.16, the installation program uses Cluster API instead of Terraform to provision cluster infrastructure during installations on VMware vSphere. 1.3.2.3. Cluster API replaces Terraform for Nutanix installations In OpenShift Container Platform 4.16, the installation program uses Cluster API instead of Terraform to provision cluster infrastructure during installations on Nutanix. 1.3.2.4. Cluster API replaces Terraform for Google Cloud Platform (GCP) installations (Technology Preview) In OpenShift Container Platform 4.16, the installation program uses Cluster API instead of Terraform to provision cluster infrastructure during installations on GCP. This feature is available as a Technology Preview in OpenShift Container Platform 4.16. To enable Technology Preview features, set the featureSet: TechPreviewNoUpgrade parameter in the install-config.yaml file before installation. Alternatively, add the following stanza to the install-config.yaml file before installation to enable Cluster API installation without any other Technology Preview features: featureSet: CustomNoUpgrade featureGates: - ClusterAPIInstall=true For more information, see Optional configuration parameters . 1.3.2.5. Installation on Alibaba Cloud by using Assisted Installer (Technology Preview) With this release, the OpenShift Container Platform installation program no longer supports the installer-provisioned installation on the Alibaba Cloud platform. You can install a cluster on Alibaba Cloud by using Assisted Installer, which is currently a Technology Preview feature. For more information, see Installing on Alibaba cloud . 1.3.2.6. Optional cloud controller manager cluster capability In OpenShift Container Platform 4.16, you can disable the cloud controller manager capability during installation. For more information, see Cloud controller manager capability . 1.3.2.7. FIPS installation requirements in OpenShift Container Platform 4.16 With this update, if you install a FIPS-enabled cluster, you must run the installation program from a RHEL 9 computer that is configured to operate in FIPS mode, and you must use a FIPS-capable version of the installation program. For more information, see Support for FIPS cryptography . 1.3.2.8. Optional additional tags for VMware vSphere In OpenShift Container Platform 4.16, you can add up to ten tags to attach to the virtual machines (VMs) provisioned by a VMware vSphere cluster. These tags are in addition to the unique cluster-specific tag that the installation program uses to identify and remove associated VMs when a cluster is decommissioned. You can define the tags on the VMware vSphere VMs in the install-config.yaml file during cluster creation. For more information, see Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster . You can define tags for compute or control plane machines on an existing cluster by using machine sets. For more information, see "Adding tags to machines by using machine sets" for compute or control plane machine sets. 1.3.2.9. Required administrator acknowledgment when updating from OpenShift Container Platform 4.15 to 4.16 OpenShift Container Platform 4.16 uses Kubernetes 1.29, which removed several deprecated APIs . A cluster administrator must provide manual acknowledgment before the cluster can be updated from OpenShift Container Platform 4.15 to 4.16. 
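The acknowledgment itself is provided by patching the admin-acks config map in the openshift-config namespace. The following command is a sketch only; the acknowledgment key shown is an assumption based on the naming pattern used for earlier releases, so confirm the exact key in Preparing to update to OpenShift Container Platform 4.16 before running it:

oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.15-kube-1.29-api-removals-in-4.16":"true"}}' --type=merge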
This is to help prevent issues after updating to OpenShift Container Platform 4.16, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this is done, the administrator can provide the administrator acknowledgment. All OpenShift Container Platform 4.15 clusters require this administrator acknowledgment before they can be updated to OpenShift Container Platform 4.16. For more information, see Preparing to update to OpenShift Container Platform 4.16 . 1.3.2.10. Secure kubeadmin password from being displayed in the console With this release, you can prevent the kubeadmin password from being displayed in the console after the installation by using the --skip-password-print flag during cluster creation. The password remains accessible in the auth directory. 1.3.2.11. OpenShift-based Appliance Builder (Technology Preview) With this release, the OpenShift-based Appliance Builder is available as a Technology Preview feature. The Appliance Builder enables self-contained OpenShift Container Platform cluster installations, meaning that it does not rely on internet connectivity or external registries. It is a container-based utility that builds a disk image that includes the Agent-based Installer, which can then be used to install multiple OpenShift Container Platform clusters. For more information, see the OpenShift-based Appliance Builder User Guide . 1.3.2.12. Bring your own IPv4 (BYOIP) feature enabled for installation on AWS With this release, you can enable bring your own public IPv4 addresses (BYOIP) feature when installing on Amazon Web Services (AWS) by using the publicIpv4Pool field to allocate Elastic IP addresses (EIPs). You must ensure that you have the required permissions to enable BYOIP. For more information, see Optional AWS configuration parameters . 1.3.2.13. Deploy GCP in the Dammam (Saudi Arabia) and Johannesburg (South Africa) regions You can deploy OpenShift Container Platform 4.16 in Google Cloud Platform (GCP) in the Dammam, Saudi Arabia ( me-central2 ) region and in the Johannesburg, South Africa ( africa-south1 ) region. For more information, see Supported GCP regions . 1.3.2.14. Installation on NVIDIA H100 instance types on Google Cloud Platform (GCP) With this release, you can deploy compute nodes on GPU-enabled NVIDIA H100 machines when installing a cluster on GCP. For more information, see Tested instance types for GCP and Google's documentation about the Accelerator-optimized machine family . 1.3.3. Postinstallation configuration 1.3.3.1. Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator With this release, you can manage workloads on multi-architecture clusters by using the Multiarch Tuning Operator. This Operator enhances the operational experience within multi-architecture clusters, and single-architecture clusters that are migrating to a multi-architecture compute configuration. It implements the ClusterPodPlacementConfig custom resource (CR) to support architecture-aware workload scheduling. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 1.3.3.2. 
Support for adding 64-bit x86 compute machines to a cluster with 64-bit ARM control plane machines This feature provides support for adding 64-bit x86 compute machines to a multi-architecture cluster with 64-bit ARM control plane machines. With this release, you can add 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. 1.3.3.3. Support for installing an Agent-based Installer cluster with multi payload This feature provides support for installing an Agent-based Installer cluster with multi payload. After installing the Agent-based Installer cluster with multi payload, you can add compute machines with different architectures to the cluster. 1.3.4. Web console 1.3.4.1. Language support for French and Spanish With this release, French and Spanish are supported in the web console. You can update the language in the web console from the Language list on the User Preferences page. 1.3.4.2. Patternfly 4 is now deprecated with 4.16 With this release, Patternfly 4 and React Router 5 are deprecated in the web console. All plugins should migrate to Patternfly 5 and React Router 6 as soon as possible. 1.3.4.3. Administrator perspective This release introduces the following updates to the Administrator perspective of the web console: A Google Cloud Platform (GCP) token authorization, Auth Token GCP , and a Configurable TLS ciphers filter was added to the Infrastructure features filter in the OperatorHub. A new quick start, Impersonating the system:admin user , is available with information on impersonating the system:admin user. A pod's last termination state is now available to view on the Container list and Container details pages. An Impersonate Group action is now available from the Groups and Group details pages without having to search for the appropriate RoleBinding . You can collapse and expand the Getting started section. 1.3.4.3.1. Node CSR handling in the OpenShift Container Platform web console With this release, the OpenShift Container Platform web console supports node certificate signing requests (CSRs). 1.3.4.3.2. Cross Storage Class clone and restore With this release, you can select a storage class from the same provider when completing clone or restore operations. This flexibility allows seamless transitions between storage classes with different replica counts. For example, moving from a storage class with 3 replicas to 2/1 replicas. 1.3.4.4. Developer Perspective This release introduces the following updates to the Developer perspective of the web console: When searching, a new section was added to the list of Resources on the Search page to display the recently searched items in the order they were searched. 1.3.4.4.1. Console Telemetry With this release, anonymized user analytics were enabled if cluster telemetry is also enabled. This is the default for most of the cluster and provides Red Hat with metrics for how the web console is used. Cluster administrators can update this in each cluster and opt-in, opt-out, or disable front-end telemetry. 1.3.5. OpenShift CLI (oc) 1.3.5.1. oc-mirror plugin v2 (Technology Preview) The oc-mirror plugin v2 for OpenShift Container Platform includes new features and functionalities that improve the mirroring process for Operator images and other OpenShift Container Platform content. 
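As a quick orientation before the list of enhancements that follows, a typical oc-mirror plugin v2 run is sketched below. The registry host and image set configuration file name are placeholders, and the --config flag and destination syntax are assumed to carry over from oc-mirror plugin v1:

oc mirror --v2 --config ./imageset-config.yaml docker://registry.example.com:5000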
The following are the key enhancements and features in oc-mirror plugin v2: Automatic generation of IDMS and ITMS objects : oc-mirror plugin v2 automatically generates a comprehensive list of ImageDigestMirrorSet (IDMS) and ImageTagMirrorSet (ITMS) objects after each run. These objects replace the ImageContentSourcePolicy (ICSP) used in oc-mirror plugin v1. This enhancement eliminates the need for manual merging and cleanup of operator images and ensures all necessary images are included. CatalogSource objects : CatalogSource objects creation, where the plugin now generates CatalogSource objects for all relevant catalog indexes to enhance the application of oc-mirror's output artifacts to disconnected clusters. Improved verification : oc-mirror plugin v2 verifies that the complete image set specified in the image set config is mirrored to the registry, regardless of whether the images were previously mirrored or not. This ensures comprehensive and reliable mirroring. Cache system : The new cache system replaces metadata, maintaining minimal archive sizes by incorporating only new images into the archive. This optimizes storage and improves performance. Selective mirroring by date : Users can now generate mirroring archives based on the mirroring date, allowing for the selective inclusion of new images. Enhanced image deletion control : The introduction of a Delete feature replaces automatic pruning, providing users with greater control over image deletion. Support for registries.conf : oc-mirror plugin v2 supports the registries.conf file that facilitates mirroring to multiple enclaves using the same cache. This enhances flexibility and efficiency in managing mirrored images. Operator version filtering : Users can filter Operator versions by bundle name, offering more precise control over the versions included in the mirroring process. Differences Between oc-mirror v1 and v2 While oc-mirror plugin v2 brings numerous enhancements, some features from oc-mirror plugin v1 are not yet present in oc-mirror plugin v2: Helm Charts: Helm charts are not present in oc-mirror plugin v2. ImageSetConfig v1alpha2 : The API version v1alpha2 is not available, users must update to v2alpha1 . Storage Metadata ( storageConfig ): Storage metadata is not used in oc-mirror plugin v2 ImageSetConfiguration . Automatic Pruning: Replaced by the new Delete feature in oc-mirror plugin v2. Release Signatures: Release signatures are not generated in oc-mirror plugin v2. Some commands: The init , list , and describe commands are not available in oc-mirror plugin v2. Using oc-mirror plugin v2 To use the oc-mirror plugin v2, add the --v2 flag to the oc-mirror command line. The oc-mirror OpenShift CLI ( oc ) plugin is used to mirror all the required OpenShift Container Platform content and other images to your mirror registry, simplifying the maintenance of disconnected clusters. 1.3.5.2. Introducing the oc adm upgrade status command (Technology Preview) Previously, the oc adm upgrade command provided limited information about the status of a cluster update. This release adds the oc adm upgrade status command, which decouples status information from the oc adm upgrade command and provides specific information regarding a cluster update, including the status of the control plane and worker node updates. 1.3.5.3. 
Warning for duplicate resource short names With this release, if you query a resource by using its short name, the OpenShift CLI ( oc ) returns a warning if more than one custom resource definition (CRD) with the same short name exists in the cluster. Example warning Warning: short name "ex" could also match lower priority resource examples.test.com 1.3.5.4. New flag to require confirmation when deleting resources (Technology Preview) This release introduces a new --interactive flag for the oc delete command. When the --interactive flag is set to true , the resource is deleted only if the user confirms the deletion. This flag is available as a Technology Preview feature. 1.3.6. IBM Z and IBM LinuxONE With this release, IBM Z(R) and IBM(R) LinuxONE are now compatible with OpenShift Container Platform 4.16. You can perform the installation with z/VM, LPAR, or Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM). For installation instructions, see Preparing to install on IBM Z and IBM LinuxONE . Important Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS). 1.3.6.1. IBM Z and IBM LinuxONE notable enhancements The IBM Z(R) and IBM(R) LinuxONE release on OpenShift Container Platform 4.16 adds improvements and new capabilities to OpenShift Container Platform components and concepts. This release introduces support for the following features on IBM Z(R) and IBM(R) LinuxONE: Agent-based Installer ISO boot for RHEL KVM Ingress Node Firewall Operator Multi-architecture compute machines in an LPAR Secure boot for z/VM and LPAR 1.3.7. IBM Power IBM Power(R) is now compatible with OpenShift Container Platform 4.16. For installation instructions, see the following documentation: Installing a cluster on IBM Power(R) Installing a cluster on IBM Power(R) in a restricted network Important Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS). 1.3.7.1. IBM Power notable enhancements The IBM Power(R) release on OpenShift Container Platform 4.16 adds improvements and new capabilities to OpenShift Container Platform components. This release introduces support for the following features on IBM Power(R): CPU manager Ingress Node Firewall Operator 1.3.7.2. IBM Power, IBM Z, and IBM LinuxONE support matrix Starting in OpenShift Container Platform 4.14, Extended Update Support (EUS) is extended to the IBM Power(R) and the IBM Z(R) platform. For more information, see the OpenShift EUS Overview . Table 1.1. 
OpenShift Container Platform features Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Alternate authentication providers Supported Supported Agent-based Installer Supported Supported Assisted Installer Supported Supported Automatic Device Discovery with Local Storage Operator Unsupported Supported Automatic repair of damaged machines with machine health checking Unsupported Unsupported Cloud controller manager for IBM Cloud(R) Supported Unsupported Controlling overcommit and managing container density on nodes Unsupported Unsupported Cron jobs Supported Supported Descheduler Supported Supported Egress IP Supported Supported Encrypting data stored in etcd Supported Supported FIPS cryptography Supported Supported Helm Supported Supported Horizontal pod autoscaling Supported Supported Hosted control planes (Technology Preview) Supported Supported IBM Secure Execution Unsupported Supported Installer-provisioned Infrastructure Enablement for IBM Power(R) Virtual Server Supported Unsupported Installing on a single node Supported Supported IPv6 Supported Supported Monitoring for user-defined projects Supported Supported Multi-architecture compute nodes Supported Supported Multi-architecture control plane Supported Supported Multipathing Supported Supported Network-Bound Disk Encryption - External Tang Server Supported Supported Non-volatile memory express drives (NVMe) Supported Unsupported nx-gzip for Power10 (Hardware Acceleration) Supported Unsupported oc-mirror plugin Supported Supported OpenShift CLI ( oc ) plugins Supported Supported Operator API Supported Supported OpenShift Virtualization Unsupported Unsupported OVN-Kubernetes, including IPsec encryption Supported Supported PodDisruptionBudget Supported Supported Precision Time Protocol (PTP) hardware Unsupported Unsupported Red Hat OpenShift Local Unsupported Unsupported Scheduler profiles Supported Supported Secure Boot Unsupported Supported Stream Control Transmission Protocol (SCTP) Supported Supported Support for multiple network interfaces Supported Supported The openshift-install utility to support various SMT levels on IBM Power(R) (Hardware Acceleration) Supported Supported Three-node cluster support Supported Supported Topology Manager Supported Unsupported z/VM Emulated FBA devices on SCSI disks Unsupported Supported 4K FCP block device Supported Supported Table 1.2. Persistent storage options Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Persistent storage using iSCSI Supported [1] Supported [1] , [2] Persistent storage using local volumes (LSO) Supported [1] Supported [1] , [2] Persistent storage using hostPath Supported [1] Supported [1] , [2] Persistent storage using Fibre Channel Supported [1] Supported [1] , [2] Persistent storage using Raw Block Supported [1] Supported [1] , [2] Persistent storage using EDEV/FBA Supported [1] Supported [1] , [2] Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation or other supported storage protocols. Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA. Table 1.3. 
Operators Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE cert-manager Operator for Red Hat OpenShift Supported Supported Cluster Logging Operator Supported Supported Cluster Resource Override Operator Supported Supported Compliance Operator Supported Supported Cost Management Metrics Operator Supported Supported File Integrity Operator Supported Supported HyperShift Operator Technology Preview Technology Preview IBM Power(R) Virtual Server Block CSI Driver Operator Supported Unsupported Ingress Node Firewall Operator Supported Supported Local Storage Operator Supported Supported MetalLB Operator Supported Supported Network Observability Operator Supported Supported NFD Operator Supported Supported NMState Operator Supported Supported OpenShift Elasticsearch Operator Supported Supported Vertical Pod Autoscaler Operator Supported Supported Table 1.4. Multus CNI plugins Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Bridge Supported Supported Host-device Supported Supported IPAM Supported Supported IPVLAN Supported Supported Table 1.5. CSI Volumes Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Cloning Supported Supported Expansion Supported Supported Snapshot Supported Supported 1.3.8. Authentication and authorization 1.3.8.1. Enabling Microsoft Entra Workload ID on existing clusters In this release, you can enable Microsoft Entra Workload ID to use short-term credentials on an existing Microsoft Azure OpenShift Container Platform cluster. This functionality is now also supported in versions 4.14 and 4.15 of OpenShift Container Platform. For more information, see Enabling token-based authentication . 1.3.9. Networking 1.3.9.1. OpenShift SDN network plugin blocks future major upgrades As part of the OpenShift Container Platform move to OVN-Kubernetes as the only supported network plugin, starting with OpenShift Container Platform 4.16, if your cluster uses the OpenShift SDN network plugin, you cannot upgrade to future major versions of OpenShift Container Platform without migrating to OVN-Kubernetes. For more information about migrating to OVN-Kubernetes, see Migrating from the OpenShift SDN network plugin . If you try an upgrade, the Cluster Network Operator reports the following status: - lastTransitionTime: "2024-04-11T05:54:37Z" message: Cluster is configured with OpenShiftSDN, which is not supported in the version. Please follow the documented steps to migrate from OpenShiftSDN to OVN-Kubernetes in order to be able to upgrade. https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html reason: OpenShiftSDNConfigured status: "False" type: Upgradeable 1.3.9.2. Dual-NIC Intel E810 Westport Channel as PTP grandmaster clock (Generally Available) Configuring linuxptp services as grandmaster clock (T-GM) for dual Intel E810 Westport Channel network interface controllers (NICs) is now a generally available feature in OpenShift Container Platform. The host system clock is synchronized from the NIC that is connected to the Global Navigation Satellite Systems (GNSS) time source. The second NIC is synced to the 1PPS timing output provided by the NIC that is connected to GNSS. For more information see Configuring linuxptp services as a grandmaster clock for dual E810 Westport Channel NICs . 1.3.9.3. Dual-NIC Intel E810 PTP boundary clock with highly available system clock (Generally Available) You can configure the linuxptp services ptp4l and phc2sys as a highly available (HA) system clock for dual PTP boundary clocks (T-BC). 
For more information, see Configuring linuxptp as a highly available system clock for dual-NIC Intel E810 PTP boundary clocks . 1.3.9.4. Configuring pod placement to check network connectivity To periodically test network connectivity among cluster components, the Cluster Network Operator (CNO) creates the network-check-source deployment and the network-check-target daemon set. In OpenShift Container Platform 4.16, you can configure the nodes by setting node selectors and run the source and target pods to check the network connectivity. For more information, see Verifying connectivity to an endpoint . 1.3.9.5. Define multiple CIDR blocks for one network security group (NSG) rule With this release, IP addresses and ranges are handled more efficiently in NSGs for OpenShift Container Platform clusters hosted on Microsoft Azure. As a result, the maximum limit of Classless Inter-Domain Routings (CIDRs) for all Ingress Controllers in Microsoft Azure clusters, using the allowedSourceRanges field, increases from approximately 1000 to 4000 CIDRs. 1.3.9.6. Migration from OpenShift SDN to OVN-Kubernetes on Nutanix With this release, migration from the OpenShift SDN network plugin to OVN-Kubernetes is now supported on Nutanix platforms. For more information, see Migration to the OVN-Kubernetes network plugin . 1.3.9.7. Improved integration between CoreDNS and egress firewall (Technology Preview) With this release, OVN-Kubernetes uses a new DNSNameResolver custom resource to keep track of DNS records in your egress firewall rules, and is available as a Technology Preview. This custom resource supports the use of both wildcard DNS names and regular DNS names and allows access to DNS names regardless of the IP addresses associated with its change. For more information, see Improved DNS resolution and resolving wildcard domain names . 1.3.9.8. Parallel node draining during SR-IOV network policy updates With this release, you can configure the SR-IOV Network Operator to drain nodes in parallel during network policy updates. The option to drain nodes in parallel enables faster rollouts of SR-IOV network configurations. You can use the SriovNetworkPoolConfig custom resource to configure parallel node draining and define the maximum number of nodes in the pool that the Operator can drain in parallel. For further information, see Configuring parallel node draining during SR-IOV network policy updates . 1.3.9.9. SR-IOV Network Operator no longer automatically creates the SriovOperatorConfig CR As of OpenShift Container Platform 4.16, the SR-IOV Network Operator no longer automatically creates a SriovOperatorConfig custom resource (CR). Create the SriovOperatorConfig CR by using the procedure described in Configuring the SR-IOV Network Operator 1.3.9.10. Supporting double-tagged packets (QinQ) This release introduces 802.1Q-in-802.1Q also known as QinQ support . QinQ introduces a second VLAN tag, where the service provider designates the outer tag for their use, offering them flexibility, while the inner tag remains dedicated to the customer's VLAN. When two VLAN tags are present in a packet, the outer VLAN tag can be either 802.1Q or 802.1ad. The inner VLAN tag must always be 802.1Q. For more information, see Configuring QinQ support for SR-IOV enabled workloads . 1.3.9.11. 
Configuring a user-managed load balancer for on-premise infrastructure With this release, you can configure an OpenShift Container Platform cluster on any on-premise infrastructure, such as bare metal, VMware vSphere, Red Hat OpenStack Platform (RHOSP), or Nutanix, to use a user-managed load balancer in place of the default load balancer. For this configuration, you must specify loadBalancer.type: UserManaged in your cluster's install-config.yaml file. For more information about this feature on bare-metal infrastructure, see Services for a user-managed load balancer in Setting up the environment for an OpenShift installation . 1.3.9.12. Detect and warning for iptables With this release, if you have pods in your cluster using iptables rules the following event message is given to warn against future deprecation: This pod appears to have created one or more iptables rules. IPTables is deprecated and will no longer be available in RHEL 10 and later. You should consider migrating to another API such as nftables or eBPF. For more information, see Getting started with nftables . If you are running third-party software, check with your vendor to ensure they will have an nftables based version available soon. 1.3.9.13. Ingress network flows for OpenShift Container Platform services With this release, you can view the ingress network flows for OpenShift Container Platform services. You can use this information to manage ingress traffic for your network and improve network security. For more information, see OpenShift Container Platform network flow matrix . 1.3.9.14. Patching an existing dual-stack network With this release, you can add IPv6 virtual IPs (VIPs) for API and Ingress services to an existing dual-stack-configured cluster by patching the cluster infrastructure. If you have already upgraded your cluster to OpenShift Container Platform 4.16 and you need to convert the single-stack cluster network to a dual-stack cluster network, you must specify the following for your cluster in the YAML configuration patch file: An IPv4 network for API and Ingress services on the first machineNetwork configuration. An IPv6 network for API and Ingress services on the second machineNetwork configuration. For more information, see Converting to a dual-stack cluster network in Converting to IPv4/IPv6 dual-stack networking . 1.3.9.15. Integration of MetalLB and FRR-K8s (Technology Preview) This release introduces FRR-K8s , a Kubernetes based DaemonSet that exposes a subset of the FRR API in a Kubernetes-compliant manner. As a cluster administrator, you can use the FRRConfiguration custom resource (CR) to configure the MetalLB Operator to use the FRR-K8s daemon set as the backend. You can use this to operate FRR services, such as receiving routes. For more information, see Configuring the integration of MetalLB and FRR-K8s . 1.3.9.16. Creating a route with externally managed certificate (Technology Preview) With this release, OpenShift Container Platform routes can be configured with third-party certificate management solutions, utilising the .spec.tls.externalCertificate field in the route API. This allows you to reference externally managed TLS certificates through secrets, streamlining the process by eliminating manual certificate management. By using externally managed certificates, you reduce errors, ensure a smoother certificate update process, and enable the OpenShift router to promptly serve renewed certificates. For more information, see Creating a route with externally managed certificate . 1.3.9.17. 
AdminNetworkPolicy is generally available This feature provides two new APIs, AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP). Before namespaces are created, cluster Administrators can use ANP and BANP to apply cluster-scoped network policies and safeguards for an entire cluster. Because it is cluster scoped, ANP provides Administrators a solution to manage the security of their network at scale without having to duplicate their network policies on each namespace. For more information, see AdminNetworkPolicy in Network security . 1.3.9.18. Limited live migration to the OVN-Kubernetes network plugin Previously, when migrating from OpenShift SDN to OVN-Kubernetes, the only available option was an offline migration method. This process included some downtime, during which clusters were unreachable. This release introduces a limited live migration method. The limited live migration method is the process in which the OpenShift SDN network plugin and its network configurations, connections, and associated resources are migrated to the OVN-Kubernetes network plugin without service interruption. It is available for OpenShift Container Platform. It is not available for hosted control plane deployment types. This migration method is valuable for deployment types that require constant service availability and offers the following benefits: Continuous service availability Minimized downtime Automatic node rebooting Seamless transition from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin Migration to OVN-Kubernetes is intended to be a one-way process. For more information, see Limited live migration to the OVN-Kubernetes network plugin overview . 1.3.9.19. Overlapping IP configuration for multi-tenant networks with Whereabouts Previously, it was not possible to configure the same CIDR range twice and to have the Whereabouts CNI plugin assign IP addresses from these ranges independently. This limitation caused issues in multi-tenant environments where different groups might need to select overlapping CIDR ranges. With this release, the Whereabouts CNI plugin supports overlapping IP address ranges through the inclusion of a network_name parameter. Administrators can use the network_name parameter to configure the same CIDR range multiple times within separate NetworkAttachmentDefinitions , which enables independent IP address assignments for each range. This feature also includes enhanced namespace handling, stores IPPool custom resources (CRs) in the appropriate namespaces, and supports cross-namespaces when permitted by Multus. These improvements provide greater flexibility and management capabilities in multi-tenant environments. For more information about this feature, see Dynamic IP address assignment configuration with Whereabouts . 1.3.9.20. Support for changing the OVN-Kubernetes network plugin internal IP address ranges If you use the OVN-Kubernetes network plugin, you can configure the transit, join, and masquerade subnets. The transit, join and masquerade subnets can be configured either during cluster installation or after. The subnet defaults are: Transit subnet: 100.88.0.0/16 and fd97::/64 Join subnet: 100.64.0.0/16 and fd98::/64 Masquerade subnet: 169.254.169.0/29 and fd69::/125 For more information about these configuration fields, see Cluster Network Operator configuration object . For more information about configuring the transit and join subnets on an existing cluster, see Configure OVN-Kubernetes internal IP address subnets. 1.3.9.21. 
IPsec telemetry The Telemetry and the Insights Operator collects telemetry on IPsec connections. For more information, see Showing data collected by Telemetry . 1.3.10. Storage 1.3.10.1. HashiCorp Vault is now available for the Secrets Store CSI Driver Operator (Technology Preview) You can now use the Secrets Store CSI Driver Operator to mount secrets from HashiCorp Vault to a Container Storage Interface (CSI) volume in OpenShift Container Platform. The Secrets Store CSI Driver Operator is available as a Technology Preview feature. For the full list of available secrets store providers, see Secrets store providers . For information about using the Secrets Store CSI Driver Operator to mount secrets from HashiCorp Vault, see Mounting secrets from HashiCorp Vault . 1.3.10.2. Volume cloning supported for Microsoft Azure File (Technology Preview) OpenShift Container Platform 4.16 introduces volume cloning for the Microsoft Azure File Container Storage Interface (CSI) Driver Operator as a Technology Preview feature. Volume cloning duplicates an existing persistent volume (PV) to help protect against data loss in OpenShift Container Platform. You can also use a volume clone just as you would use any standard volume. For more information, see Azure File CSI Driver Operator and CSI volume cloning . 1.3.10.3. Node Expansion Secret is generally available The Node Expansion Secret feature allows your cluster to expand storage of mounted volumes, even when access to those volumes requires a secret (for example, a credential for accessing a Storage Area Network (SAN) fabric) to perform the node expand operation. OpenShift Container Platform 4.16 supports this feature as generally available. 1.3.10.4. Changing vSphere CSI maximum number of snapshots is generally available The default maximum number of snapshots in VMware vSphere Container Storage Interface (CSI) is 3 per volume. In OpenShift Container Platform 4.16, you can now change this maximum number of snapshots to a maximum of 32 per volume. You also have granular control of the maximum number of snapshots for vSAN and Virtual Volume datastores. OpenShift Container Platform 4.16 supports this feature as generally available. For more information, see Changing the maximum number of snapshots for vSphere . 1.3.10.5. Persistent volume last phase transition time parameter (Technology Preview) In OpenShift Container Platform 4.16 introduces a new parameter, LastPhaseTransitionTime , which has a timestamp that is updated every time a persistent volume (PV) transitions to a different phase ( pv.Status.Phase ). This feature is being released with Technology Preview status. 1.3.10.6. Persistent storage using CIFS/SMB CSI Driver Operator (Technology Preview) OpenShift Container Platform is capable of provisioning persistent volumes (PVs) with a Container Storage Interface (CSI) driver for the Common Internet File System (CIFS) dialect/Server Message Block (SMB) protocol. The CIFS/SMB CSI Driver Operator that manages this driver is in Technology Preview status. For more information, see CIFS/SMB CSI Driver Operator . 1.3.10.7. RWOP with SELinux context mount is generally available OpenShift Container Platform 4.14 introduced a new access mode with Technical Preview status for persistent volumes (PVs) and persistent volume claims (PVCs) called ReadWriteOncePod (RWOP). RWOP can be used only in a single pod on a single node compared to the existing ReadWriteOnce access mode where a PV or PVC can be used on a single node by many pods. 
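A persistent volume claim requests this access mode in the same way as any other access mode; the following sketch uses placeholder values for the claim name, size, and storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwop-pvc
spec:
  accessModes:
  - ReadWriteOncePod
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storage_class_name>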
If the driver enables it, RWOP uses the SELinux context mount set in the PodSpec or container, which allows the driver to mount the volume directly with the correct SELinux labels. This eliminates the need to recursively relabel the volume, and pod startup can be significantly faster. In OpenShift Container Platform 4.16, this feature is generally available. For more information, see Access modes . 1.3.10.8. vSphere CSI Driver 3.1 updated CSI topology requirements To support VMware vSphere Container Storage Interface (CSI) volume provisioning and usage in multi-zonal clusters, the deployment should match certain requirements imposed by CSI driver. These requirements have changed starting with 3.1.0, and although OpenShift Container Platform 4.16 accepts both the old and new tagging methods, you should use the new tagging method since VMware vSphere considers the old way an invalid configuration. To prevent problems, you should not use the old tagging method. For more information, see vSphere CSI topology requirements . 1.3.10.9. Support for configuring thick-provisioned storage This feature provides support for configuring thick-provisioned storage. If you exclude the deviceClasses.thinPoolConfig field in the LVMCluster custom resource (CR), logical volumes are thick provisioned. Using thick-provisioned storage includes the following limitations: No copy-on-write support for volume cloning. No support for VolumeSnapshotClass . Therefore, CSI snapshotting is not supported. No support for over-provisioning. As a result, the provisioned capacity of PersistentVolumeClaims (PVCs) is immediately reduced from the volume group. No support for thin metrics. Thick-provisioned devices only support volume group metrics. For information about configuring the LVMCluster CR, see About the LVMCluster custom resource . 1.3.10.10. Support for a new warning message when device selector is not configured in the LVMCluster custom resource This update provides a new warning message when you do not configure the deviceSelector field in the LVMCluster custom resource (CR). The LVMCluster CR supports a new field, deviceDiscoveryPolicy , which indicates whether the deviceSelector field is configured. If you do not configure the deviceSelector field, LVM Storage automatically sets the deviceDiscoveryPolicy field to RuntimeDynamic . Otherwise, the deviceDiscoveryPolicy field is set to Preconfigured . It is not recommended to exclude the deviceSelector field from the LMVCluster CR. For more information about the limitations of not configuring the deviceSelector field, see About adding devices to a volume group . 1.3.10.11. Support for adding encrypted devices to a volume group This feature provides support for adding encrypted devices to a volume group. You can enable disk encryption on the cluster nodes during an OpenShift Container Platform installation. After encrypting a device, you can specify the path to the LUKS encrypted device in the deviceSelector field in the LVMCluster custom resource. For information about disk encryption, About disk encryption and Configuring disk encryption and mirroring . For more information about adding devices to a volume group, see About adding devices to a volume group . 1.3.11. Operator lifecycle 1.3.11.1. Operator API renamed to ClusterExtension (Technology Preview) Earlier Technology Preview phases of Operator Lifecycle Manager (OLM) 1.0 introduced a new Operator API, provided as operator.operators.operatorframework.io by the Operator Controller component. 
In OpenShift Container Platform 4.16, this API is renamed ClusterExtension , provided as clusterextension.olm.operatorframework.io , for this Technology Preview phase of OLM 1.0. This API still streamlines management of installed extensions, which includes Operators via the registry+v1 bundle format, by consolidating user-facing APIs into a single object. The rename to ClusterExtension addresses the following: More accurately reflects the simplified functionality of extending a cluster's capabilities Better represents a more flexible packaging format Cluster prefix clearly indicates that ClusterExtension objects are cluster-scoped, a change from legacy OLM where Operators could be either namespace-scoped or cluster-scoped For more information, see Operator Controller . Important OLM 1.0 does not support dependency resolution. If an extension declares dependencies for other APIs or packages, the dependencies must be present on the cluster before you attempt to install the extension. Currently, OLM 1.0 supports the installation of extensions that meet the following criteria: The extension must use the AllNamespaces install mode. The extension must not use webhooks. Cluster extensions that use webhooks or that target a single or specified set of namespaces cannot be installed. 1.3.11.2. Improved status condition messages and deprecation notices for cluster extensions in Operator Lifecycle Manager (OLM) 1.0 (Technology Preview) With this release, OLM 1.0 displays the following status condition messages for installed cluster extensions: Specific bundle name Installed version Improved health reporting Deprecation notices for packages, channels, and bundles 1.3.11.3. Support for legacy OLM upgrade edges in OLM 1.0 (Technology Preview) When determining upgrade edges for an installed cluster extension, Operator Lifecycle Manager (OLM) 1.0 supports legacy OLM semantics starting in OpenShift Container Platform 4.16. This support follows the behavior from legacy OLM, including replaces , skips , and skipRange directives, with a few noted differences. By supporting legacy OLM semantics, OLM 1.0 now honors the upgrade graph from catalogs accurately. Note Support for semantic versioning (semver) upgrade constraints was introduced in OpenShift Container Platform 4.15 but disabled in 4.16 in favor of legacy OLM semantics during this Technology Preview phase. For more information, see Upgrade constraint semantics . 1.3.12. Builds 1.3.12.1. Unauthenticated users were removed from the system:webhook role binding With this release, unauthenticated users no longer have access to the system:webhook role binding. Before OpenShift Container Platform 4.16, unauthenticated users could access the system:webhook role binding. Changing this access for unauthenticated users adds an additional layer of security and should only be enabled by users when necessary. This change is for new clusters; clusters are not affected. There are use cases where you might want to allow unauthenticated users the system:webhook role binding for specific namespaces. The system:webhook cluster role allows users to trigger builds from external systems that do not use OpenShift Container Platform authentication mechanisms, such as GitHub, GitLab, and Bitbucket. Cluster admins can grant unauthenticated users access to the system:webhook role binding to facilitate this use case. Important Always verify compliance with your organization's security standards when modifying unauthenticated access. 
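When that use case applies, the grant takes the shape of a standard RoleBinding that binds the system:unauthenticated group to the system:webhook cluster role within a single namespace. The following is a sketch only, with placeholder binding and namespace names; the supported steps are in the procedure referenced next:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: webhook-access-unauthenticated
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:webhook
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated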
To grant unauthenticated users access to the system:webhook role binding in specific namespaces, see Adding unauthenticated users to the system:webhook role binding . 1.3.13. Machine Config Operator 1.3.13.1. Garbage collection of unused rendered machine configs With this release, you can now garbage collect unused rendered machine configs. By using the oc adm prune renderedmachineconfigs command, you can view the unused rendered machine configs, determine which to remove, then batch delete the rendered machine configs that you no longer need. Having too many machine configs can make working with the machine configs confusing and can also contribute to disk space and performance issues. For more information, see Managing unused rendered machine configs . 1.3.13.2. Node disruption policies (Technology Preview) By default, when you make certain changes to the parameters in a MachineConfig object, the Machine Config Operator (MCO) drains and reboots the nodes associated with that machine config. However, you can create a node disruption policy in the MCO namespace that defines a set of Ignition config objects changes that would require little or no disruption to your workloads. For more information, see Using node disruption policies to minimize disruption from machine config changes . 1.3.13.3. On-cluster RHCOS image layering (Technology Preview) With Red Hat Enterprise Linux CoreOS (RHCOS) image layering, you can now automatically build the custom layered image directly in your cluster, as a Technology Preview feature. Previously, you needed to build the custom layered image outside of the cluster, then pull the image into the cluster. You can use the image layering feature to extend the functionality of your base RHCOS image by layering additional images onto the base image. For more information, see RHCOS image layering . 1.3.13.4. Updating boot images (Technology Preview) By default, the MCO does not delete the boot image it uses to bring up a Red Hat Enterprise Linux CoreOS (RHCOS) node. Consequently, the boot image in your cluster is not updated along with your cluster. You can now configure your cluster to update the boot image whenever you update your cluster. For more information, see Updating boot images . 1.3.14. Machine management 1.3.14.1. Configuring expanders for the cluster autoscaler With this release, the cluster autoscaler can use the LeastWaste , Priority , and Random expanders. You can configure these expanders to influence the selection of machine sets when scaling the cluster. For more information, see Configuring the cluster autoscaler . 1.3.14.2. Managing machines with the Cluster API for VMware vSphere (Technology Preview) This release introduces the ability to manage machines by using the upstream Cluster API, integrated into OpenShift Container Platform, as a Technology Preview for VMware vSphere clusters. This capability is in addition or an alternative to managing machines with the Machine API. For more information, see About the Cluster API . 1.3.14.3. Defining a vSphere failure domain for a control plane machine set With this release, the previously Technology Preview feature of defining a vSphere failure domain for a control plane machine set is Generally Available. For more information, see Control plane configuration options for VMware vSphere . 1.3.15. Nodes 1.3.15.1. Moving the Vertical Pod Autoscaler Operator pods The Vertical Pod Autoscaler Operator (VPA) consists of three components: the recommender, updater, and admission controller. 
The Operator and each component has its own pod in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure or worker nodes. For more information, see Moving the Vertical Pod Autoscaler Operator components . 1.3.15.2. Additional information collected by must-gather With this release, the oc adm must-gather command collects the following additional information: OpenShift CLI ( oc ) binary version Must-gather logs These additions help identify issues that might stem from using a specific version of oc . The oc adm must-gather command also lists what image was used and if any data could not be gathered in the must-gather logs. For more information, see About the must-gather tool . 1.3.15.3. Editing the BareMetalHost resource In OpenShift Container Platform 4.16 and later, you can edit the baseboard management controller (BMC) address in the BareMetalHost resource of a bare-metal node. The node must be in the Provisioned , ExternallyProvisioned , Registering , or Available state. Editing the BMC address in the BareMetalHost resource will not result in deprovisioning the node. See Editing a BareMetalHost resource for additional details. 1.3.15.4. Attaching a non-bootable ISO In OpenShift Container Platform 4.16 and later, you can attach a generic, non-bootable ISO virtual media image to a provisioned node by using the DataImage resource. After you apply the resource, the ISO image becomes accessible to the operating system on the reboot. The node must use Redfish or drivers derived from it to support this feature. The node must be in the Provisioned or ExternallyProvisioned state. See Attaching a non-bootable ISO to a bare-metal node for additional details. 1.3.16. Monitoring The in-cluster monitoring stack for this release includes the following new and modified features. 1.3.16.1. Updates to monitoring stack components and dependencies This release includes the following version updates for in-cluster monitoring stack components and dependencies: kube-state-metrics to 2.12.0 Metrics Server to 0.7.1 node-exporter to 1.8.0 Prometheus to 2.52.0 Prometheus Operator to 0.73.2 Thanos to 0.35.0 1.3.16.2. Changes to alerting rules Note Red Hat does not guarantee backward compatibility for recording rules or alerting rules. Added the ClusterMonitoringOperatorDeprecatedConfig alert to monitor when the Cluster Monitoring Operator configuration uses a deprecated field. Added the PrometheusOperatorStatusUpdateErrors alert to monitor when the Prometheus Operator fails to update object status. 1.3.16.3. Metrics Server component to access the Metrics API general availability (GA) The Metrics Server component is now generally available and automatically installed instead of the deprecated Prometheus Adapter. Metrics Server collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs, which frees the core platform Prometheus stack from handling this functionality. For more information, see MetricsServerConfig in the config map API reference for the Cluster Monitoring Operator. 1.3.16.4. New monitoring role to allow read-only access to the Alertmanager API This release introduces a new monitoring-alertmanager-view role to allow read-only access to the Alertmanager API in the openshift-monitoring project. 1.3.16.5. VPA metrics are available in the kube-state-metrics agent Vertical Pod Autoscaler (VPA) metrics are now available through the kube-state-metrics agent. 
VPA metrics follow an exposition format similar to the one they used before native support for them was deprecated and removed upstream. 1.3.16.6. Change in proxy service for monitoring components With this release, the proxy service in front of Prometheus, Alertmanager, and Thanos Ruler has been updated from OAuth to kube-rbac-proxy . This change might affect service accounts and users accessing these API endpoints without the appropriate roles and cluster roles. 1.3.16.7. Change in how Prometheus handles duplicate samples With this release, when Prometheus scrapes a target, duplicate samples are no longer silently ignored, even if they have the same value. The first sample is accepted and the prometheus_target_scrapes_sample_duplicate_timestamp_total counter is incremented, which might trigger the PrometheusDuplicateTimestamps alert. 1.3.17. Network Observability Operator The Network Observability Operator releases updates independently from the OpenShift Container Platform minor version release stream. Updates are available through a single Rolling Stream, which is supported on all currently supported versions of OpenShift Container Platform 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the Network Observability release notes . 1.3.18. Scalability and performance 1.3.18.1. Workload partitioning enhancement With this release, platform pods deployed with a workload annotation that includes both CPU limits and CPU requests have the CPU limits accurately calculated and applied as a CPU quota for the specific pod. In prior releases, if a workload-partitioned pod had both CPU limits and requests set, they were ignored by the webhook. The pod did not benefit from workload partitioning and was not locked down to specific cores. This update ensures that the requests and limits are now interpreted correctly by the webhook. Note It is expected that if the values for CPU limits differ from the values for requests in the annotation, the CPU limits are taken as being the same as the requests. For more information, see Workload partitioning . 1.3.18.2. Linux Control Groups version 2 is now supported with the performance profile feature Beginning with OpenShift Container Platform 4.16, Control Groups version 2 (cgroup v2), also known as cgroup2 or cgroupsv2, is enabled by default for all new deployments, even when performance profiles are present. Since OpenShift Container Platform 4.14, cgroup v2 has been the default, but the performance profile feature required the use of cgroup v1. This issue has been resolved. cgroup v1 is still used in upgraded clusters with performance profiles that have initial installation dates before OpenShift Container Platform 4.16. cgroup v1 can still be used in the current version by changing the cgroupMode field in the node.config object to v1 . For more information, see Configuring the Linux cgroup version on your nodes . 1.3.18.3. Support for increasing the etcd database size (Technology Preview) With this release, you can increase the disk quota in etcd. This is a Technology Preview feature. For more information, see Increasing the database size for etcd . 1.3.18.4. Reserved core frequency tuning With this release, the Node Tuning Operator supports setting CPU frequencies in the PerformanceProfile for reserved and isolated core CPUs. This is an optional feature that you can use to define specific frequencies.
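A hedged sketch of the reserved and isolated core frequency settings described above. The hardwareTuning stanza, its field names, and the use of kHz values are assumptions to verify against the PerformanceProfile API reference for your release:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: frequency-tuning-example
    spec:
      cpu:
        reserved: "0-1"    # housekeeping cores
        isolated: "2-15"   # workload cores
      hardwareTuning:
        reservedCpuFreq: 2800000   # assumed field: frequency in kHz for reserved cores
        isolatedCpuFreq: 2500000   # assumed field: frequency in kHz for isolated cores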
The Node Tuning Operator then sets those frequencies by enabling the intel_pstate CPUFreq driver in the Intel hardware. You must follow Intel's recommendations on frequencies for FlexRAN-like applications, which require the default CPU frequency to be set to a lower value than the default running frequency. 1.3.18.5. Node Tuning Operator intel_pstate driver default setting Previously, for the RAN DU-profile, setting the realTime workload hint to true in the PerformanceProfile always disabled the intel_pstate . With this release, the Node Tuning Operator detects the underlying Intel hardware using TuneD and appropriately sets the intel_pstate kernel parameter based on the processor's generation. This decouples the intel_pstate from the realTime and highPowerConsumption workload hints. The intel_pstate now depends only on the underlying processor generation. For pre-IceLake processors, the intel_pstate is deactivated by default, whereas for IceLake and later generation processors, the intel_pstate is set to active . 1.3.18.6. Support for compute nodes with AMD EPYC Zen 4 CPUs From release 4.16.30, you can use the PerformanceProfile custom resource (CR) to configure compute nodes on machines equipped with AMD EPYC Zen 4 CPUs, such as Genoa and Bergamo. Only single NUMA domain (NPS=1) configurations are supported. Per-pod power management is currently not supported on AMD. 1.3.19. Edge computing 1.3.19.1. Using RHACM PolicyGenerator resources to manage GitOps ZTP cluster policies (Technology Preview) You can now use PolicyGenerator resources and Red Hat Advanced Cluster Management (RHACM) to deploy policies for managed clusters with GitOps ZTP. The PolicyGenerator API is part of the Open Cluster Management standard and provides a generic way of patching resources, which is not possible with the PolicyGenTemplate API. Using PolicyGenTemplate resources to manage and deploy policies will be deprecated in an upcoming OpenShift Container Platform release. For more information, see Configuring managed cluster policies by using PolicyGenerator resources . Note The PolicyGenerator API does not currently support merging patches with custom Kubernetes resources that contain lists of items, for example, in PtpConfig CRs. 1.3.19.2. TALM policy remediation With this release, Topology Aware Lifecycle Manager (TALM) uses a Red Hat Advanced Cluster Management (RHACM) feature to remediate inform policies on managed clusters. This enhancement removes the need for the Operator to create enforce copies of inform policies during policy remediation. This enhancement also reduces the workload on the hub cluster due to copied policies, and can reduce the overall time required to remediate policies on managed clusters. For more information, see Update policies on managed clusters . 1.3.19.3. Accelerated provisioning of GitOps ZTP (Technology Preview) With this release, you can reduce the time taken for cluster installation by using accelerated provisioning of GitOps ZTP for single-node OpenShift. Accelerated ZTP speeds up installation by applying Day 2 manifests derived from policies at an earlier stage. The benefits of accelerated provisioning of GitOps ZTP increase with the scale of your deployment. Full acceleration gives more benefit on a larger number of clusters. With a smaller number of clusters, the reduction in installation time is less significant. For more information, see Accelerated provisioning of GitOps ZTP . 1.3.19.4.
Image-based upgrade for single-node OpenShift clusters using Lifecycle Agent With this release, you can use the Lifecycle Agent to orchestrate an image-based upgrade for single-node OpenShift clusters from OpenShift Container Platform <4.y> to <4.y+2>, and <4.y.z> to <4.y.z+n>. The Lifecycle Agent generates an Open Container Initiative (OCI) image that matches the configuration of participating clusters. In addition to the OCI image, the image-based upgrade uses the ostree library and the OADP Operator to reduce upgrade and service outage duration when transitioning between the original and target platform versions. For more information, see Understanding the image-based upgrade for single-node OpenShift clusters . 1.3.19.5. Image-based upgrade enhancements With this release, the image-based upgrade introduces the following enhancements: Simplifies the upgrade process for a large group of managed clusters by adding the ImageBasedGroupUpgrade API on the hub cluster Labels the managed clusters for action completion when using the ImageBasedGroupUpgrade API Improves seed cluster validation before the seed image generation Automatically cleans the container storage disk if usage reaches a certain threshold on the managed clusters Adds comprehensive event history in the new status.history field of the ImageBasedUpgrade CR For more information about the ImageBasedGroupUpgrade API, see Managing the image-based upgrade at scale using the ImageBasedGroupUpgrade CR on the hub . 1.3.19.6. Deploying IPsec encryption to managed clusters with GitOps ZTP and RHACM You can now enable IPsec encryption in managed single-node OpenShift clusters that you deploy with GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between pods and IPsec endpoints external to the managed cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode. For more information, see Configuring IPsec encryption for single-node OpenShift clusters using GitOps ZTP and SiteConfig resources . 1.3.20. Hosted control planes 1.3.20.1. Hosted control planes is Generally Available on Amazon Web Services (AWS) Hosted control planes for OpenShift Container Platform 4.16 is now Generally Available on the AWS platform. 1.3.21. Security A new signer certificate authority (CA), openshift-etcd , is now available to sign certificates. This CA is contained in a trust bundle with the existing CA. Two CA secrets, etcd-signer and etcd-metric-signer , are also available for rotation. Starting with this release, all certificates will move to a proven library. This change allows for the automatic rotation of all certificates that were not managed by cluster-etcd-operator . All node-based certificates will continue with the current update process. 1.4. Notable technical changes OpenShift Container Platform 4.16 introduces the following notable technical changes. HAProxy version 2.8 OpenShift Container Platform 4.16 uses HAProxy 2.8. SHA-1 certificates no longer supported for use with HAProxy SHA-1 certificates are no longer supported for use with HAProxy. Both existing and new routes that use SHA-1 certificates in OpenShift Container Platform 4.16 are rejected and no longer function. For more information about creating secure routes, see Secured Routes . etcd tuning parameters With this release, the etcd tuning parameters can be set to values that optimize performance and decrease latency, as follows: "" (Default), Standard , and Slower . Unauthenticated users were removed from some cluster roles With this release, unauthenticated users no longer have access to specific cluster roles that are necessary for certain feature sets. Before OpenShift Container Platform 4.16, unauthenticated users could access certain cluster roles. Removing this access for unauthenticated users adds an additional layer of security; access for unauthenticated users should be granted only when necessary. This change applies only to new clusters; existing clusters are not affected. There are use cases where you might want to give access to unauthenticated users for specific cluster roles. To grant unauthenticated users access to specific cluster roles that are necessary for certain features, see Adding unauthenticated groups to cluster roles . Important Always verify compliance with your organization's security standards when modifying unauthenticated access. RHCOS dasd image artifacts no longer supported on IBM Z(R) and IBM(R) LinuxONE (s390x) With this release, dasd image artifacts for the s390x architecture are removed from the OpenShift Container Platform image building pipeline. You can still use the metal4k image artifact, which is identical and contains the same functionality. Support for EgressIP with ExternalTrafficPolicy=Local services Previously, it was unsupported for pods selected by an EgressIP to also serve as backends for services with externalTrafficPolicy set to Local . When attempting this configuration, service ingress traffic reaching the pods was incorrectly rerouted to the egress node hosting the EgressIP. This affected how responses to incoming service traffic connections were handled and led to non-functional services when externalTrafficPolicy was set to Local , as connections were dropped and the service became unavailable. With OpenShift Container Platform 4.16, OVN-Kubernetes now supports the use of ExternalTrafficPolicy=Local services and EgressIP configurations at the same time on the same set of selected pods. OVN-Kubernetes now only reroutes the traffic originating from the EgressIP pods towards the egress node while routing the responses to ingress service traffic from the EgressIP pods via the same node where the pod is located. Legacy service account API token secrets are no longer generated for each service account Before OpenShift Container Platform 4.16, when the integrated OpenShift image registry was enabled, a legacy service account API token secret was generated for every service account in the cluster. Starting with OpenShift Container Platform 4.16, when the integrated OpenShift image registry is enabled, the legacy service account API token secret is no longer generated for each service account. Additionally, when the integrated OpenShift image registry is enabled, the image pull secret generated for every service account no longer uses a legacy service account API token. Instead, the image pull secret now uses a bound service account token that is automatically refreshed before it expires. For more information, see Automatically generated image pull secrets . For information about detecting legacy service account API token secrets that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform .
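As one hedged way to start the detection described above, the standard Kubernetes secret type selector can list long-lived service account API token secrets; treat the output as a starting point and follow the linked Knowledgebase article before deleting anything:

    # List legacy (long-lived) service account API token secrets across all namespaces
    oc get secrets --all-namespaces --field-selector type=kubernetes.io/service-account-token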
Support for external cloud authentication providers In this release, the functionality to authenticate to private registries on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure clusters is moved from the in-tree provider to binaries that ship with OpenShift Container Platform. This change supports the default external cloud authentication provider behavior that is introduced in Kubernetes 1.29. The builder service account is no longer created if the Build cluster capability is disabled With this release, if you disable the Build cluster capability, the builder service account and its corresponding secrets are no longer created. For more information, see Build capability . Default OLM 1.0 upgrade constraints changed to legacy OLM semantics (Technology Preview) In OpenShift Container Platform 4.16, Operator Lifecycle Manager (OLM) 1.0 changes its default upgrade constraints from semantic versioning (semver) to legacy OLM semantics. For more information, see Support for legacy OLM upgrade edges in OLM 1.0 (Technology Preview) . Removal of the RukPak Bundle API from OLM 1.0 (Technology Preview) In OpenShift Container Platform 4.16, Operator Lifecycle Manager (OLM) 1.0 removes the Bundle API, which was provided by the RukPak component. The RukPak BundleDeployment API remains and supports registry+v1 bundles for unpacking Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format. For more information, see Rukpak (Technology Preview) . dal12 region was added With this release, the dal12 region has been added to the IBM Power(R) VS Installer. Regions added to IBM Power(R) Virtual Server This release introduces the ability to deploy to new IBM Power(R) Virtual Server (VS) regions osa21 , syd04 , lon06 , and sao01 . IBM Power(R) Virtual Server updated to use Cluster API Provider IBM Cloud 0.8.0 With this release, the IBM Power(R) Virtual Server has been updated to use Cluster API Provider IBM Cloud version 0.8.0. Additional debugging statements for ServiceInstanceNameToGUID With this release, additional debugging statements were added to the ServiceInstanceNameToGUID function. 1.5. Deprecated and removed features Some features available in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.16, refer to the tables below. Additional details for more functionality that has been deprecated and removed are listed after the tables. In the following tables, features are marked with the following statuses: Not Available, Technology Preview, General Availability, Deprecated, and Removed. Operator lifecycle and development deprecated and removed features Table 1.6.
Operator lifecycle and development deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Operator SDK | General Availability | General Availability | Deprecated
Scaffolding tools for Ansible-based Operator projects | General Availability | General Availability | Deprecated
Scaffolding tools for Helm-based Operator projects | General Availability | General Availability | Deprecated
Scaffolding tools for Go-based Operator projects | General Availability | General Availability | Deprecated
Scaffolding tools for Hybrid Helm-based Operator projects | Technology Preview | Technology Preview | Deprecated
Scaffolding tools for Java-based Operator projects | Technology Preview | Technology Preview | Deprecated
Platform Operators | Technology Preview | Technology Preview | Removed
Plain bundles | Technology Preview | Technology Preview | Removed
SQLite database format for Operator catalogs | Deprecated | Deprecated | Deprecated
Images deprecated and removed features
Table 1.7. Cluster Samples Operator deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Cluster Samples Operator | General Availability | General Availability | Deprecated
Monitoring deprecated and removed features
Table 1.8. Monitoring deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
dedicatedServiceMonitors setting that enables dedicated service monitors for core platform monitoring | General Availability | Deprecated | Removed
prometheus-adapter component that queries resource metrics from Prometheus and exposes them in the metrics API | General Availability | Deprecated | Removed
Installation deprecated and removed features
Table 1.9. Installation deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
OpenShift SDN network plugin | Deprecated | Removed [1] | Removed
--cloud parameter for oc adm release extract | Deprecated | Deprecated | Deprecated
CoreDNS wildcard queries for the cluster.local domain | Deprecated | Deprecated | Deprecated
compute.platform.openstack.rootVolume.type for RHOSP | Deprecated | Deprecated | Deprecated
controlPlane.platform.openstack.rootVolume.type for RHOSP | Deprecated | Deprecated | Deprecated
ingressVIP and apiVIP settings in the install-config.yaml file for installer-provisioned infrastructure clusters | Deprecated | Deprecated | Deprecated
Package-based RHEL compute machines | General Availability | General Availability | Deprecated
platform.aws.preserveBootstrapIgnition parameter for Amazon Web Services (AWS) | General Availability | General Availability | Deprecated
Terraform infrastructure provider for Amazon Web Services (AWS), VMware vSphere, and Nutanix | General Availability | General Availability | Removed
Installing a cluster on Alibaba Cloud with installer-provisioned infrastructure | Technology Preview | Technology Preview | Removed
[1] While the OpenShift SDN network plugin is no longer supported by the installation program in version 4.15, you can upgrade a cluster that uses the OpenShift SDN plugin from version 4.14 to version 4.15.
Updating clusters deprecated and removed features
Table 1.10. Updating clusters deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Machine management deprecated and removed features
Table 1.11. Machine management deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Managing machine with Machine API for Alibaba Cloud | Technology Preview | Technology Preview | Removed
Cloud controller manager for Alibaba Cloud | Technology Preview | Technology Preview | Removed
Storage deprecated and removed features Table 1.12.
Storage deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Persistent storage using FlexVolume | Deprecated | Deprecated | Deprecated
AliCloud Disk CSI Driver Operator | General Availability | General Availability | Removed
Networking deprecated and removed features
Table 1.13. Networking deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Kuryr on RHOSP | Deprecated | Removed | Removed
OpenShift SDN network plugin | Deprecated | Deprecated | Deprecated
iptables | Deprecated | Deprecated | Deprecated
Web console deprecated and removed features
Table 1.14. Web console deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
Patternfly 4 | General Availability | Deprecated | Deprecated
React Router 5 | General Availability | Deprecated | Deprecated
Node deprecated and removed features
Table 1.15. Node deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
ImageContentSourcePolicy (ICSP) objects | Deprecated | Deprecated | Deprecated
Kubernetes topology label failure-domain.beta.kubernetes.io/zone | Deprecated | Deprecated | Deprecated
Kubernetes topology label failure-domain.beta.kubernetes.io/region | Deprecated | Deprecated | Deprecated
cgroup v1 | General Availability | General Availability | Deprecated
Workloads deprecated and removed features
Table 1.16. Workloads deprecated and removed tracker
Feature | 4.14 | 4.15 | 4.16
DeploymentConfig objects | Deprecated | Deprecated | Deprecated
Bare metal monitoring deprecated and removed features
Table 1.17. Bare Metal Event Relay Operator tracker
Feature | 4.14 | 4.15 | 4.16
Bare Metal Event Relay Operator | Technology Preview | Deprecated | Deprecated
1.5.1. Deprecated features 1.5.1.1. Linux Control Groups version 1 is now deprecated In Red Hat Enterprise Linux (RHEL) 9, the default mode is cgroup v2. When Red Hat Enterprise Linux (RHEL) 10 is released, systemd will not support booting in the cgroup v1 mode and only cgroup v2 mode will be available. As such, cgroup v1 is deprecated in OpenShift Container Platform 4.16 and later. cgroup v1 will be removed in a future OpenShift Container Platform release. 1.5.1.2. Cluster Samples Operator The Cluster Samples Operator is deprecated with the OpenShift Container Platform 4.16 release. The Cluster Samples Operator will stop managing and providing support for the non-S2I samples (image streams and templates). No new templates, samples, or non-Source-to-Image (Non-S2I) image streams will be added to the Cluster Samples Operator. However, the existing S2I builder image streams and templates will continue to receive updates until the Cluster Samples Operator is removed in a future release. 1.5.1.3. Package-based RHEL compute machines With this release, installation of package-based RHEL worker nodes is deprecated. In a future release, RHEL worker nodes will be removed and no longer supported. RHCOS image layering will replace this feature and supports installing additional packages on the base operating system of your worker nodes. For more information about image layering, see RHCOS image layering . 1.5.1.4. Operator SDK CLI tool and related testing and scaffolding tools are deprecated The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.
The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated: the base image for Ansible-based Operator projects and the base image for Helm-based Operator projects. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework) . 1.5.1.5. The preserveBootstrapIgnition parameter on Amazon Web Services (AWS) is deprecated The preserveBootstrapIgnition parameter for Amazon Web Services in the install-config.yaml file has been deprecated. You can use the bestEffortDeleteIgnition parameter instead. 1.5.2. Removed features 1.5.2.1. Deprecated disk partition configuration method The nodes.diskPartition section in the SiteConfig custom resource (CR) is deprecated with the OpenShift Container Platform 4.16 release. This configuration has been replaced with the ignitionConfigOverride method, which provides a more flexible way of creating a disk partition for any use case. For more information, see Configuring disk partitioning with SiteConfig . 1.5.2.2. Removal of platform Operators and plain bundles (Technology Preview) OpenShift Container Platform 4.16 removes platform Operators (Technology Preview) and plain bundles (Technology Preview), which were prototypes for Operator Lifecycle Manager (OLM) 1.0 (Technology Preview). 1.5.2.3. Dell iDRAC driver for BMC addressing removed OpenShift Container Platform 4.16 supports baseboard management controller (BMC) addressing with Dell servers as documented in BMC addressing for Dell iDRAC . Specifically, it supports idrac-virtualmedia , redfish , and ipmi . In previous versions, idrac was included, but not documented or supported. In OpenShift Container Platform 4.16, idrac has been removed. 1.5.2.4. Dedicated service monitors for core platform monitoring With this release, the dedicated service monitors feature for core platform monitoring has been removed. You can no longer enable this feature in the cluster-monitoring-config config map object in the openshift-monitoring namespace. To replace this feature, Prometheus functionality has been improved to ensure that alerts and time aggregations are accurate. This improved functionality is active by default and makes the dedicated service monitors feature obsolete. 1.5.2.5. Prometheus Adapter for core platform monitoring With this release, the Prometheus Adapter component for core platform monitoring has been removed. It has been replaced by the new Metrics Server component. 1.5.2.6. MetalLB AddressPool custom resource definition (CRD) removed The MetalLB AddressPool custom resource definition (CRD) has been deprecated for several versions. However, in this release, the CRD is completely removed. The sole supported method of configuring MetalLB address pools is by using the IPAddressPools CRD. 1.5.2.7. Service Binding Operator documentation removed With this release, the documentation for the Service Binding Operator (SBO) has been removed because this Operator is no longer supported. 1.5.2.8.
AliCloud CSI Driver Operator is no longer supported OpenShift Container Platform 4.16 no longer supports the AliCloud Container Storage Interface (CSI) Driver Operator. 1.5.2.9. Beta APIs removed from Kubernetes 1.29 Kubernetes 1.29 removed the following deprecated APIs, so you must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation .
Table 1.18. APIs removed from Kubernetes 1.29
Resource | Removed API | Migrate to | Notable changes
FlowSchema | flowcontrol.apiserver.k8s.io/v1beta2 | flowcontrol.apiserver.k8s.io/v1 or flowcontrol.apiserver.k8s.io/v1beta3 | No
PriorityLevelConfiguration | flowcontrol.apiserver.k8s.io/v1beta2 | flowcontrol.apiserver.k8s.io/v1 or flowcontrol.apiserver.k8s.io/v1beta3 | Yes
1.5.2.10. Managing machine with Machine API for Alibaba Cloud OpenShift Container Platform 4.16 removes support for managing machines with Machine API for Alibaba Cloud clusters. This change includes removing support for the cloud controller manager for Alibaba Cloud, which was previously a Technology Preview feature. 1.6. Bug fixes API Server and Authentication Previously, ephemeral and csi volumes were not properly added to security context constraints (SCCs) on upgraded clusters. With this release, SCCs on upgraded clusters are properly updated to have ephemeral and csi volumes. ( OCPBUGS-33522 ) Previously, the ServiceAccounts resource could not be used with OAuth clients for a cluster with the ImageRegistry capability enabled. With this release, this issue is fixed. ( OCPBUGS-30319 ) Previously, when you created a pod with an empty security context and you had access to all security context constraints (SCCs), the pod would receive the anyuid SCC. After the ovn-controller component added a label to the pod, the pod would be re-admitted for SCC selection, where the pod received an escalated SCC, such as privileged . With this release, this issue is resolved so the pod is not re-admitted for SCC selection. ( OCPBUGS-11933 ) Previously, the hostmount-anyuid security context constraints (SCC) did not have a built-in cluster role because the SCC was incorrectly named hostmount in the cluster role. With this release, the SCC name in the cluster role was updated properly to hostmount-anyuid , so the hostmount-anyuid SCC now has a functioning cluster role. ( OCPBUGS-33184 ) Previously, clusters that were created before OpenShift Container Platform 4.7 had several secrets of type SecretTypeTLS . Upon upgrading to OpenShift Container Platform 4.16, these secrets are deleted and re-created with the type kubernetes.io/tls . This removal could cause a race condition and the contents of the secrets could be lost. With this release, the secret type change now happens automatically and clusters created before OpenShift Container Platform 4.7 can upgrade to 4.16 without risking losing the contents of these secrets. ( OCPBUGS-31384 ) Previously, some Kubernetes API server events did not have the correct timestamps. With this release, Kubernetes API server events now have the correct timestamps. ( OCPBUGS-27074 ) Previously, the Kubernetes API Server Operator attempted to delete a Prometheus rule that was removed in OpenShift Container Platform 4.13 to ensure it was deleted. This resulted in failed deletion messages in the audit logs every few minutes.
With this release, the Kubernetes API Server Operator no longer attempts to remove this nonexistent rule and there are no more failed deletion messages in the audit logs. ( OCPBUGS-25894 ) Bare Metal Hardware Provisioning Previously, newer versions of Redfish used Manager resources to deprecate the Uniform Resource Identifier (URI) for the RedFish Virtual Media API. As a result, any hardware that used the newer Redfish URI for Virtual Media could not be provisioned. With this release, the Ironic API identifies the correct Redfish URI to deploy for the RedFish Virtual Media API so that hardware relying on either the deprecated or newer URI can be provisioned. ( OCPBUGS-30171 ) Previously, the Bare Metal Operator (BMO) was not using a leader lock to control incoming and outgoing Operator pod traffic. After an OpenShift Deployment object included a new Operator pod, the new pod competed for system resources, such as the ClusterOperator status, and this terminated any outgoing Operator pods. This issue also impacted clusters that did not include any bare-metal nodes. With this release, the BMO includes a leader lock to manage new pod traffic, and this fix resolves the competing pod issue. ( OCPBUGS-25766 ) Previously, when you attempted to delete a BareMetalHost object before the installation started, the metal3 Operator attempted to create a PreprovImage image. The process of creating this image caused the BareMetalHost object to still exist in certain processes. With this release, an exception is added for this situation so that the BareMetalHost object is deleted without impacting running processes. ( OCPBUGS-33048 ) Previously, Redfish virtual media in the context of Hewlett Packard Enterprise (HPE) Integrated Lights-Out 5 (iLO 5) had its bare-metal machine compression forcibly disabled to work around other unrelated issues in different hardware models. This caused the FirmwareSchema resource to be missing from each iLO 5 bare-metal machine. Each machine needs compression to fetch message registries from its Redfish Baseboard Management Controller (BMC) endpoint. With this release, each iLO 5 bare-metal machine that needs the FirmwareSchema resource does not have compression forcibly disabled. ( OCPBUGS-31104 ) Previously, the inspector.ipxe configuration file used the IRONIC_IP variable, which did not account for IPv6 addresses because they have brackets. Consequently, when the user supplied an incorrect boot_mac_address , iPXE fell back to the inspector.ipxe configuration file, which supplied a malformed IPv6 host header since it did not contain brackets. With this release, the inspector.ipxe configuration file has been updated to use the IRONIC_URL_HOST variable, which accounts for IPv6 addresses and resolves the issue. ( OCPBUGS-22699 ) Previously, Ironic Python Agent assumed that all server disks had a 512 byte sector size when trying to wipe disks. This caused the disk wipe to fail. With this release, Ironic Python Agent checks the disk sector size and has separate values for disk wiping so that the disk wipe succeeds. ( OCPBUGS-31549 ) Builds Previously, clusters that updated from earlier versions to 4.16 continued to allow builds to be triggered by unauthenticated webhooks. With this release, new clusters require build webhooks to be authenticated. Builds are not triggered by unauthenticated webhooks unless a cluster administrator allows unauthenticated webhooks in the namespace or cluster.
( OCPBUGS-33378 ) Previously, if the developer or cluster administrator used lowercase environment variable names for proxy information, these environment variables were carried into the build output container image. At runtime, the proxy settings were active and had to be unset. With this release, lowercase versions of the *_PROXY environment variables are prevented from leaking into built container images. Now, buildDefaults are kept only during the build, and settings created only for the build process are removed before the image is pushed to the registry. ( OCPBUGS-34825 ) Cloud Compute Previously, the Cloud Controller Manager (CCM) Operator used predefined roles on Google Cloud Platform (GCP) instead of granular permissions. With this release, the CCM Operator is updated to use granular permissions on GCP clusters. ( OCPBUGS-26479 ) Previously, the installation program populated the network.devices , template , and workspace fields in the spec.template.spec.providerSpec.value section of the VMware vSphere control plane machine set custom resource (CR). These fields should be set in the vSphere failure domain, and the installation program populating them caused unintended behaviors. Updating these fields did not trigger an update to the control plane machines, and these fields were cleared when the control plane machine set was deleted. With this release, the installation program is updated to no longer populate values that are included in the failure domain configuration. If these values are not defined in a failure domain configuration, for instance on a cluster that is updated to OpenShift Container Platform 4.16 from an earlier version, the values defined by the installation program are used. ( OCPBUGS-32947 ) Previously, a node associated with a rebooting machine briefly having a status of Ready=Unknown triggered the UnavailableReplicas condition in the Control Plane Machine Set Operator. This condition caused the Operator to enter the Available=False state and trigger alerts because that state indicates a nonfunctional component that requires immediate administrator intervention. This alert should not have been triggered for the brief and expected unavailability while rebooting. With this release, a grace period for node unreadiness is added to avoid triggering unnecessary alerts. ( OCPBUGS-34970 ) Previously, a transient failure to fetch bootstrap data during machine creation, such as a transient failure to connect to the API server, caused the machine to enter a terminal failed state. With this release, failure to fetch bootstrap data during machine creation is retried indefinitely until it eventually succeeds. ( OCPBUGS-34158 ) Previously, the Machine API Operator panicked when deleting a server in an error state because it was not passed a port list. With this release, deleting a machine stuck in an ERROR state does not crash the controller. ( OCPBUGS-34155 ) Previously, an optional internal function of the cluster autoscaler caused repeated log entries when it was not implemented. The issue is resolved in this release. ( OCPBUGS-33932 ) Previously, if the control plane machine set was created with a template without a path during installation on a VMware vSphere cluster, the Control Plane Machine Set Operator rejected modification or deletion of the control plane machine set custom resource (CR). With this release, the Operator allows template names for vSphere in the control plane machine set definition.
( OCPBUGS-32295 ) Previously, the Control Plane Machine Set Operator crashed when attempting to update a VMware vSphere cluster because the infrastructure resource was not configured. With this release, the Operator can handle this scenario so that the cluster update is able to proceed. ( OCPBUGS-31808 ) Previously, when a user created a compute machine set with taints, they could choose not to specify the Value field. Failure to specify this field caused the cluster autoscaler to crash. With this release, the cluster autoscaler is updated to handle an empty Value field. ( OCPBUGS-31421 ) Previously, IPv6 services were wrongly marked as internal on the RHOSP cloud provider, making it impossible to share IPv6 load balancers between OpenShift Container Platform services. With this release, IPv6 services are not marked as internal, allowing IPv6 load balancers to be shared between services that use stateful IPv6 addresses. This fix allows load balancers to use stateful IPv6 addresses that are defined in the loadBalancerIP property of the service. ( OCPBUGS-29605 ) Previously, when a control plane machine was marked as unready and a change was initiated by modifying the control plane machine set, the unready machine was removed prematurely. This premature action caused multiple indexes to be replaced simultaneously. With this release, the control plane machine set no longer deletes a machine when only a single machine exists within the index. This change prevents premature roll-out of changes and prevents more than one index from being replaced at a time. ( OCPBUGS-29249 ) Previously, connections to the Azure API sometimes hung for up to 16 minutes. With this release, a timeout is introduced to prevent hanging API calls. ( OCPBUGS-29012 ) Previously, the Machine API IBM Cloud controller did not integrate the full logging options from the klogr package. As a result, the controller crashed in Kubernetes version 1.29 and later. With this release, the missing options are included and the issue is resolved. ( OCPBUGS-28965 ) Previously, the Cluster API IBM Power Virtual Server controller pod would start on the unsupported IBM Cloud platform. This caused the controller pod to get stuck in the creation phase. With this update, the cluster detects the difference between IBM Power Virtual Server and IBM Cloud. The cluster then only starts on the supported platform. ( OCPBUGS-28539 ) Previously, the machine autoscaler could not account for any taint set directly on the compute machine set spec due to a parsing error. This could cause undesired scaling behavior when relying on a compute machine set taint to scale from zero. The issue is resolved in this release and the machine autoscaler can now scale up correctly and identify taints that prevent workloads from scheduling. ( OCPBUGS-27509 ) Previously, machine sets that ran on Microsoft Azure regions with no availability zone support always created AvailabilitySets objects for Spot instances. This operation caused Spot instances to fail because the instances did not support availability sets. With this release, machine sets do not create AvailabilitySets objects for Spot instances that operate in non-zonal configured regions. ( OCPBUGS-25940 ) Previously, the removal of code that provided image credentials from the kubelet in OpenShift Container Platform 4.14 caused pulling images from the Amazon Elastic Container Registry (ECR) to fail without a specified pull secret.
This release includes a separate credential provider that provides ECR credentials for the kubelet. ( OCPBUGS-25662 ) Previously, the default VM type for the Azure load balancer was changed from Standard to VMSS , but the service type load balancer code could not attach standard VMs to load balancers. With this release, the default VM type is reverted to remain compatible with OpenShift Container Platform deployments. ( OCPBUGS-25483 ) Previously, OpenShift Container Platform did not include the cluster name in the names of the RHOSP load balancer resources that were created by the OpenStack Cloud Controller Manager. This behavior caused issues when LoadBalancer services had the same name in multiple clusters that ran in a single RHOSP project. With this release, the cluster name is included in the names of Octavia resources. When upgrading from an earlier cluster version, the load balancers are renamed. The new names follow the pattern kube_service_<cluster-name>_<namespace>_<service-name> instead of kube_service_kubernetes_<namespace>_<service-name> . ( OCPBUGS-13680 ) Previously, when you created or deleted large volumes of service objects simultaneously, the service controller's ability to process each service sequentially would slow down. This caused short timeout issues for the service controller and backlog issues for the objects. With this release, the service controller can now process up to 10 service objects simultaneously to reduce the backlog and timeout issues. ( OCPBUGS-13106 ) Previously, the logic that fetches the name of a node did not account for the possibility of multiple values for the returned hostname from the AWS metadata service. When multiple domains are configured for a VPC Dynamic Host Configuration Protocol (DHCP) option, this hostname might return multiple values. The space between multiple values caused the logic to crash. With this release, the logic is updated to use only the first returned hostname as the node name. ( OCPBUGS-10498 ) Previously, the Machine API Operator requested unnecessary virtualMachines/extensions permissions on Microsoft Azure clusters. The unnecessary credentials request is removed in this release. ( OCPBUGS-29956 ) Cloud Credential Operator Previously, the Cloud Credential Operator (CCO) was missing some permissions required to create a private cluster on Microsoft Azure. These missing permissions prevented installation of an Azure private cluster using Microsoft Entra Workload ID. This release includes the missing permissions and enables installation of an Azure private cluster using Workload ID. ( OCPBUGS-25193 ) Previously, a bug caused the Cloud Credential Operator (CCO) to report an incorrect mode in the metrics. Even though the cluster was in the default mode, the metrics reported that it was in the credentials removed mode. This update uses a live client in place of a cached client so that it is able to obtain the root credentials, and the CCO no longer reports an incorrect mode in the metrics. ( OCPBUGS-26488 ) Previously, the Cloud Credential Operator credentials mode metric on an OpenShift Container Platform cluster that uses Microsoft Entra Workload ID reported using manual mode. With this release, clusters that use Workload ID are updated to report that they are using manual mode with pod identity. ( OCPBUGS-27446 ) Previously, creating an Amazon Web Services (AWS) root secret on a bare metal cluster caused the Cloud Credential Operator (CCO) pod to crash. The issue is resolved in this release.
( OCPBUGS-28535 ) Previously, removing the root credential from a Google Cloud Platform (GCP) cluster that used the Cloud Credential Operator (CCO) in mint mode caused the CCO to become degraded after approximately one hour. In a degraded state, the CCO cannot manage the component credential secrets on a cluster. The issue is resolved in this release. ( OCPBUGS-28787 ) Previously, the Cloud Credential Operator (CCO) checked for a nonexistent s3:HeadBucket permission during installation on Amazon Web Services (AWS). When the CCO failed to find this permission, it considered the provided credentials insufficient for mint mode. With this release, the CCO no longer checks for the nonexistent permission. ( OCPBUGS-31678 ) Cluster Version Operator This release expands the ClusterOperatorDown and ClusterOperatorDegraded alerts to cover ClusterVersion conditions and send alerts for Available=False ( ClusterOperatorDown ) and Failing=True ( ClusterOperatorDegraded ). In previous releases, those alerts only covered ClusterOperator conditions. ( OCPBUGS-9133 ) Previously, Cluster Version Operator (CVO) changes that were introduced in OpenShift Container Platform 4.15.0, 4.14.0, 4.13.17, and 4.12.43 caused failing risk evaluations to block the CVO from fetching new update recommendations. When the risk evaluations failed, the bug caused the CVO to overlook the update recommendation service. With this release, the CVO continues to poll the update recommendation service, regardless of whether update risks are being successfully evaluated, and the issue has been resolved. ( OCPBUGS-25708 ) Developer Console Previously, when a serverless function was created in the create serverless form, BuildConfig was not created. With this update, if the Pipelines Operator is not installed, the pipeline resource is not created for a particular resource, or the pipeline is not added while creating a serverless function, the BuildConfig is created as expected. ( OCPBUGS-34143 ) Previously, after installing the Pipelines Operator, Pipeline templates took some time to become available in the cluster, but users were still able to create the deployment. With this update, the Create button on the Import from Git page is disabled if there is no pipeline template present for the resource selected. ( OCPBUGS-34142 ) Previously, the maximum number of nodes was set to 100 on the Topology page. A persistent alert, "Loading is taking longer than expected.", was displayed. With this update, the limit of nodes is increased to 300 . ( OCPBUGS-32307 ) With this update, an alert message to notify you that Service Bindings are deprecated with OpenShift Container Platform 4.15 was added to the ServiceBinding list , ServiceBinding details , Add , and Topology pages when you create a ServiceBinding , bind a component, or when a ServiceBinding is found in the current namespace. ( OCPBUGS-32222 ) Previously, the Helm Plugin index view did not display the same number of charts as the Helm CLI if the chart names varied. With this release, the Helm catalog now looks for charts.openshift.io/name and charts.openshift.io/provider so that all versions are grouped together in a single catalog title. ( OCPBUGS-32059 ) Previously, the TaskRun status was not displayed near the TaskRun name on the TaskRun details page. With this update, the TaskRun status is located beside the name of the TaskRun in the page heading.
( OCPBUGS-31745 ) Previously, an error occurred when adding parameters to the Pipeline because the deprecated resources field was added to the payload. With this update, the resources field has been removed from the payload, and you can add parameters to the Pipeline. ( OCPBUGS-31082 ) This release updates the OpenShift Pipelines plugin to support the latest Pipeline Trigger API version for the custom resource definitions (CRDs) ClusterTriggerBinding , TriggerTemplate , and EventListener . ( OCPBUGS-30958 ) Previously, CustomTasks were not recognized or remained in a Pending state. With this update, CustomTasks can be easily identified from the Pipelines List and Details pages. ( OCPBUGS-29513 ) Previously, if there was a build output image with an Image tag, then the Output Image link would not redirect to the correct ImageStream page. With this update, this has been fixed by generating a URL for the ImageStream page without adding the tag in the link. ( OCPBUGS-29355 ) Previously, BuildRun logs were not visible in the Logs tab of the BuildRun page due to a recent update in the API version of the specified resources. With this update, the logs of the TaskRuns were added back into the Logs tab of the BuildRun page for both v1alpha1 and v1beta1 versions of the Builds Operator. ( OCPBUGS-27473 ) Previously, the annotations to set scale bound values were set to autoscaling.knative.dev/maxScale and autoscaling.knative.dev/minScale . With this update, the annotations to set scale bound values are updated to autoscaling.knative.dev/min-scale and autoscaling.knative.dev/max-scale to determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs. ( OCPBUGS-27469 ) Previously, the Log tab for PipelineRuns from the Tekton Results API never finished loading. With this release, this tab loads completely for PipelineRuns loaded from the Kubernetes API or the Tekton Results API. ( OCPBUGS-25612 ) Previously, there was no indicator shown to differentiate between PipelineRuns that are loaded from the Kubernetes API or the Tekton Results API. With this update, a small archived icon is shown on the PipelineRun list and details pages to differentiate between PipelineRuns that are loaded from the Kubernetes API or the Tekton Results API. ( OCPBUGS-25396 ) Previously, on the PipelineRun list page, all TaskRuns were fetched and separated based on pipelineRun name. With this update, TaskRuns are fetched only for Failed and Cancelled PipelineRuns. A caching mechanism was also added to fetch PipelineRuns and TaskRuns associated with the Failed and Cancelled PipelineRuns. ( OCPBUGS-23480 ) Previously, the visual connector was not present between the VMs node and other non-VMs nodes in the Topology view. With this update, the visual connector is located between VMs nodes and non-VMs nodes. ( OCPBUGS-13114 ) Edge computing Previously, an issue with image-based upgrades on clusters that use proxy configurations caused operator rollouts that lengthened startup times. With this release, the issue has been fixed and upgrade times are reduced. ( OCPBUGS-33471 ) etcd Cluster Operator Previously, the wait-for-ceo command that was used during bootstrap to verify etcd rollout did not report errors for some failure modes. With this release, those error messages are now visible in the bootkube script output if the command exits with an error.
( OCPBUGS-33495 ) Previously, the etcd Cluster Operator entered a panic state during pod health checks, and this caused requests to an etcd cluster to fail. With this release, the issue is fixed so that these panic situations no longer occur. ( OCPBUGS-27959 ) Previously, the etcd Cluster Operator wrongly identified non-running controllers as deadlocked, and this caused an unnecessary pod restart. With this release, this issue is now fixed so that the Operator marks a non-running controller as an unhealthy etcd member without restarting a pod. ( OCPBUGS-30873 ) Hosted control planes Previously, Multus Container Network Interface (CNI) required certificate signing requests (CSRs) to be approved when you used the Other network type in hosted clusters. The proper role-based access control (RBAC) rules were set only when the network type was Other and was set to Calico. As a consequence, the CSRs were not approved when the network type was Other and set to Cilium. With this update, the correct RBAC rules are set for all valid network types, and RBACs are now properly configured when you use the Other network type. ( OCPBUGS-26977 ) Previously, an Amazon Web Services (AWS) policy issue prevented the Cluster API Provider AWS from retrieving the necessary domain information. As a consequence, installing an AWS hosted cluster with a custom domain failed. With this update, the policy issue is resolved. ( OCPBUGS-29391 ) Previously, in disconnected environments, the HyperShift Operator ignored registry overrides. As a consequence, changes to node pools were ignored, and node pools encountered errors. With this update, the metadata inspector works as expected during the HyperShift Operator reconciliation, and the override images are properly populated. ( OCPBUGS-34773 ) Previously, the HyperShift Operator was not using the RegistryOverrides mechanism to inspect the image from the internal registry. With this release, the metadata inspector works as expected during the HyperShift Operator reconciliation, and the OverrideImages are properly populated. ( OCPBUGS-32220 ) Previously, the Red Hat OpenShift Cluster Manager container did not have the correct Transport Layer Security (TLS) certificates. As a result, image streams could not be used in disconnected deployments. With this update, the TLS certificates are added as projected volumes. ( OCPBUGS-34390 ) Previously, the azure-kms-provider-active container in the KAS pod used an entrypoint statement in shell form in the Dockerfile. As a consequence, the container failed. With this update, the entrypoint statement uses the exec form, and the issue is resolved. ( OCPBUGS-33940 ) Previously, the konnectivity-agent daemon set used the ClusterIP DNS policy. As a result, when CoreDNS was down, the konnectivity-agent pods on the data plane could not resolve the proxy server URL, and they could fail to connect to the konnectivity-server in the control plane. With this update, the konnectivity-agent daemon set was modified to use dnsPolicy: Default . The konnectivity-agent uses the host system DNS service to look up the proxy server address, and it does not depend on CoreDNS anymore. ( OCPBUGS-31444 ) Previously, the inability to find a resource caused re-creation attempts to fail. As a consequence, many 409 response codes were logged in Hosted Cluster Config Operator logs. With this update, specific resources were added to the cache so that the Hosted Cluster Config Operator does not try to re-create existing resources.
( OCPBUGS-23228 ) Previously, the pod security violation alert was missing in hosted clusters. With this update, the alert is added to hosted clusters. ( OCPBUGS-31263 ) Previously, the recycler-pod template in hosted clusters in disconnected environments pointed to quay.io/openshift/origin-tools:latest . As a consequence, the recycler pods failed to start. With this update, the recycler pod image now points to the OpenShift Container Platform payload reference. ( OCPBUGS-31398 ) With this update, in disconnected deployments, the HyperShift Operator receives the new ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS) from the management cluster and adds them to the HyperShift Operator and the Control Plane Operator in every reconciliation loop. The changes to the ICSP or IDMS cause the control-plane-operator pod to be restarted. ( OCPBUGS-29110 ) With this update, the ControllerAvailabilityPolicy setting becomes immutable after it is set. Changing between SingleReplica and HighAvailability is not supported. ( OCPBUGS-27282 ) With this update, the machine-config-operator custom resource definitions (CRDs) are renamed to ensure that resources are being omitted properly in hosted control planes. ( OCPBUGS-34575 ) With this update, the size is reduced for audit log files that are stored in the kube-apiserver , openshift-apiserver , and oauth-apiserver pods for hosted control planes. ( OCPBUGS-31106 ) Previously, the HyperShift Operator was not using the RegistryOverrides mechanism to inspect the image from the internal registry. With this release, the metadata inspector works as expected during the HyperShift Operator reconciliation, and the OverrideImages are properly populated. ( OCPBUGS-29494 ) Image Registry Previously, after you imported image stream tags, the ImageContentSourcePolicy (ICSP) custom resource (CR) could not co-exist with the ImageDigestMirrorSet (IDMS) or ImageTagMirrorSet (ITMS) CR. OpenShift Container Platform chose ICSP instead of the other CR types. With this release, these custom resources can co-exist, so after you import image stream tags, OpenShift Container Platform can choose the required CR. ( OCPBUGS-30279 ) Previously, the oc tag command did not validate tag names when the command created new tags. After images were created from tags with invalid names, the podman pull command would fail. With this release, a validation step checks new tags for invalid names, and you can now delete existing tags that have invalid names, so that this issue no longer exists. ( OCPBUGS-25703 ) Previously, the Image Registry Operator maintained its own list of IBM Power(R) Virtual Server regions, so any new regions were not added to the list. With this release, the Operator relies on an external library for accessing regions so that it can support new regions. ( OCPBUGS-26767 ) Previously, the image registry Microsoft Azure path-fix job incorrectly required the presence of AZURE_CLIENT_ID and TENANT_CLIENT_ID parameters to function. This caused a valid configuration to throw an error message. With this release, a check is added to the Identity and Access Management (IAM) service account key to validate if these parameters are needed, so that a cluster upgrade operation no longer fails. ( OCPBUGS-32328 ) Previously, the image registry did not support Amazon Web Services (AWS) region ca-west-1 . With this release, the image registry can now be deployed in this region.
( OCPBUGS-29233 ) Previously, when the virtualHostedStyle parameter was set along with the regionEndpoint parameter in the Image Registry Operator configuration, the image registry ignored the virtual hosted style configuration. With this release, the issue is resolved so that a new upstream distribution configuration, force path style, is used instead of the downstream only version, virtual hosted style. ( OCPBUGS-34166 ) Previously, when running an OpenShift Container Platform cluster on IBM Power(R) Virtual Server where service-endpoint override was enabled, the Cloud Credential Operator (CCO) would ignore the overriding service endpoints. With this release, the CCO no longer ignores overriding service endpoints. ( OCPBUGS-32491 ) Previously, the Image Registry Operator ignored endpoint service cluster-level overrides, making configuring your cluster in an IBM Cloud(R) disconnected environment difficult. This issue only existed on installer-provisioned infrastructure. With this release, the Image Registry Operator no longer ignores these cluster-level overrides. ( OCPBUGS-26064 ) Installer Previously, installation of a three-node cluster with an invalid configuration on Google Cloud Platform (GCP) failed with a panic error that did not report the reason for the failure. With this release, the installation program validates the installation configuration to successfully install a three-node cluster on GCP. ( OCPBUGS-35103 ) Previously, installations with the Assisted Installer failed if the pull secret contained a colon in the password. With this release, pull secrets containing a colon in the password do not cause the Assisted Installer to fail. ( OCPBUGS-34400 ) Previously, the monitor-add-nodes command, which is used to monitor the process of adding nodes to an Agent-based cluster, failed to run due to a permission error. With this release, the command operates in the correct directory where it has permissions. ( OCPBUGS-34388 ) Previously, long cluster names were trimmed without warning the user. With this release, the installation program warns the user when trimming long cluster names. ( OCPBUGS-33840 ) Previously, OpenShift Container Platform did not perform quota checking for clusters installed in the Amazon Web Services (AWS) ca-west-1 region. With this release, quotas are properly enforced in this region. ( OCPBUGS-33649 ) Previously, the installation program could sometimes fail to detect that the OpenShift Container Platform API is unavailable. An additional error was resolved by increasing the disk size of the bootstrap node in Microsoft Azure installations. With this release, the installation program correctly detects if the API is unavailable. ( OCPBUGS-33610 ) Previously, control plane nodes on Microsoft Azure clusters were using Read-only caches. With this release, Microsoft Azure control plane nodes use ReadWrite caches. ( OCPBUGS-33470 ) Previously, when installing an Agent-based cluster with a proxy configured, the installation failed if the proxy configuration contained a string starting with a percent sign ( % ). With this release, the installation program correctly validates this configuration text. ( OCPBUGS-33024 ) Previously, installations on GCP could fail because the installation program attempted to create a bucket twice. With this release, the installation program no longer attempts to create the bucket twice. ( OCPBUGS-32133 ) Previously, a rare timing issue could prevent all control plane nodes from being added to an Agent-based cluster during installation.
With this release, all control plane nodes are successfully rebooted and added to the cluster during installation. ( OCPBUGS-32105 ) Previously, when using the Agent-based installation program in a disconnected environment, unnecessary certificates were added to the Certificate Authority (CA) trust bundle. With this release, the CA bundle ConfigMap only contains CAs explicitly specified by the user. ( OCPBUGS-32042 ) Previously, the installation program required a non-existent permission s3:HeadBucket when installing a cluster on Amazon Web Services (AWS). With this release, the installation program correctly requires the permission s3:ListBucket instead. ( OCPBUGS-31813 ) Previously, if the installation program failed to gather logs from the bootstrap due to an SSH connection issue, it would also not provide virtual machine (VM) serial console logs even if they were collected. With this release, the installation program provides VM serial console logs even if the SSH connection to the bootstrap machine fails. ( OCPBUGS-30774 ) Previously, when installing a cluster on VMware vSphere with static IP addresses, the cluster could create control plane machines without static IP addresses due to a conflict with other Technology Preview features. With this release, the Control Plane Machine Set Operator correctly manages the static IP assignment for control plane machines. ( OCPBUGS-29114 ) Previously, when installing a cluster on GCP with user-provided DNS, the installation program still attempted to validate DNS within the GCP DNS network. With this release, the installation program does not perform this validation for user-provided DNS. ( OCPBUGS-29068 ) Previously, when deleting a private cluster on IBM Cloud(R) that used the same domain name as a non-private IBM Cloud(R) cluster, some resources were not deleted. With this release, all private cluster resources are deleted when the cluster is removed. ( OCPBUGS-28870 ) Previously, when installing a cluster using a proxy with a character string that used the percent sign ( % ) in the configuration string, the cluster installation failed. With this release, the installation program correctly validates proxy configuration strings containing "%". ( OCPBUGS-27965 ) Previously, the installation program still allowed the use of the OpenShiftSDN network plugin even though it was removed. With this release, the installation program correctly prevents installing a cluster with this network plugin. ( OCPBUGS-27813 ) Previously, when installing a cluster on Amazon Web Services (AWS) Wavelengths or Local Zones into a region that supports either Wavelengths or Local Zones, but not both, the installation failed. With this release, installations into regions that support either Wavelengths or Local Zones can succeed. ( OCPBUGS-27737 ) Previously, when a cluster installation was attempted that used the same cluster name and base domain as an existing cluster and the installation failed due to DNS record set conflicts, removal of the second cluster would also remove the DNS record sets in the original cluster. With this release, the stored metadata contains the private zone name rather than the cluster domain, so only the correct DNS records are deleted from a removed cluster. ( OCPBUGS-27156 ) Previously, platform specific passwords that were configured in the installation configuration file of an Agent-based installation could be present in the output of the agent-gather command. With this release, passwords are redacted from the agent-gather output. 
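Relating to the percent-sign proxy validation fixes described above, the stanza that the installation program now validates is the proxy block of install-config.yaml. A brief, hedged sketch follows; the host names and credentials are placeholders:

```yaml
apiVersion: v1
baseDomain: example.com
proxy:
  httpProxy: http://user:pa%25ss@proxy.example.com:3128    # "%25" is the URL-encoded form of "%"
  httpsProxy: http://user:pa%25ss@proxy.example.com:3128
  noProxy: .cluster.local,.svc,10.0.0.0/16
```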
( OCPBUGS-26434 ) Previously, a OpenShift Container Platform cluster installed with version 4.15 or 4.16 showed a default upgrade channel of version 4.14. With this release, clusters have the correct upgrade channel after installation. ( OCPBUGS-26048 ) Previously, when deleting a VMware vSphere cluster, some TagCategory objects failed to be deleted. With this release, all cluster-related objects are correctly deleted when the cluster is removed. ( OCPBUGS-25841 ) Previously, when specifying the baremetal platform type but disabling the baremetal capability in install-config.yaml , the installation failed after a long timeout without a helpful error. With this release, the installation program provides a descriptive error and does not attempt a bare metal installation if the baremetal capability is disabled. ( OCPBUGS-25835 ) Previously, installations on VMware vSphere using the Assisted Installer could fail by preventing VMware vSphere from initializing nodes correctly. With this release, Assisted Installer installations on VMware vSphere successfully complete with all nodes initialized. ( OCPBUGS-25718 ) Previously, if a VM type was selected that did not match the architecture specified in the install-config.yaml file, the installation would fail. With this release, a validation check ensures that the architectures match before the installation begins. ( OCPBUGS-25600 ) Previously, agent-based installations could fail if an invalid number of control plane replicas was specified, such as 2. With this release, the installation program enforces the requirement of specifying either 1 or 3 control plane replicas for agent-based installations. ( OCPBUGS-25462 ) Previously, when installing a cluster on VMware vSphere using the control plane machine set Technology Preview feature, the resulting control plane machine sets had duplicate failure domains in their configuration. With this release, the installation program creates the control plane machine sets with the correct failure domains. ( OCPBUGS-25453 ) Previously, the required iam:TagInstanceProfile permission was not validated before an installer-provisioned installation, causing an installation to fail if the Identity and Access Management (IAM) permission was missing. With this release, a validation check ensures that the permission is included before the installation begins. ( OCPBUGS-25440 ) Previously, the installation program did not prevent users from installing a cluster on non-bare-metal platforms with the Cloud Credential capability disabled, although it is required. With this release, the installation program produces an error and prevents installation with the Cloud Credential capability disabled, except for on the bare-metal platform. ( OCPBUGS-24956 ) Previously, setting an architecture different from the one supported by the instance type resulted in the installation failing mid-process, after some resources were created. With this release, a validation check verifies that the instance type is compatible with the specified architecture. If the architecture is not compatible, the process fails before the installation begins. ( OCPBUGS-24575 ) Previously, the installation program did not prevent a user from installing a cluster on a cloud provider with the Cloud Controller Manager disabled, which failed without a helpful error message. With this release, the installation program produces an error stating that the Cloud Controller Manager capability is required for installations on cloud platforms. 
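The capability checks described above act on the capabilities stanza of install-config.yaml. A hedged example of the kind of configuration that is now validated before installation begins; the values shown are illustrative, not a recommended set:

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
  - CloudCredential         # required for installation on most cloud providers
  - CloudControllerManager  # required for installation on cloud platforms
  - Ingress
```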
( OCPBUGS-24415 ) Previously, the installation program could fail to remove a cluster installed on IBM Cloud(R) due to unexpected results from the IBM Cloud(R) API. With this release, clusters installed on IBM Cloud(R) can reliably be deleted by the installation program. ( OCPBUGS-20085 ) Previously, the installation program did not enforce the requirement that FIPS-enabled clusters were installed from FIPS-enabled Red Hat Enterprise Linux (RHEL) hosts. With this release, the installation program enforces the FIPS requirement. ( OCPBUGS-15845 ) Previously, proxy information that was set in the install-config.yaml file was not applied to the bootstrap process. With this release, proxy information is applied to bootstrap ignition data, which is then applied to the bootstrap machine. ( OCPBUGS-12890 ) Previously, when the IBM Power(R) Virtual Server platform had no Dynamic Host Configuration Protocol (DHCP) network name, the DHCP resource was not deleted. With this release, a check looks for any DHCP resources with an ERROR state and deletes them so that this issue no longer occurs. ( OCPBUGS-35224 ) Previously, when creating an IBM Power(R) Virtual Server cluster on installer-provisioned infrastructure by using the Cluster API, the load balancer would become busy and stall. With this release, you can use the AddIPToLoadBalancerPool command in a PollUntilContextCancel loop to restart the load balancer. ( OCPBUGS-35088 ) Previously, an installer-provisioned installation on a bare-metal platform with FIPS-enabled nodes caused installation failures. With this release, the issue is resolved. ( OCPBUGS-34985 ) Previously, when creating an install configuration for an installer-provisioned installation on IBM Power(R) Virtual Server, the survey stopped if the administrator did not enter a command on the OpenShift CLI ( oc ). The survey stopped because no default region was set in the install-config survey. With this release, the issue is resolved. ( OCPBUGS-34728 ) Previously, solid state drives (SSD) that used SATA hardware were identified as removable. The Assisted Installer for OpenShift Container Platform reported that no eligible disks were found and the installation stopped. With this release, removable disks are eligible for installation. ( OCPBUGS-34652 ) Previously, Agent-based installations with dual-stack networking failed due to IPv6 connectivity check failures, even though IPv6 connectivity could be established between nodes. With this release, the issue has been resolved. ( OCPBUGS-31631 ) Previously, due to a programming error, a script created compute server groups with the policy set for control planes. As a consequence, the serverGroupPolicy property of install-config.yaml files was ignored for compute groups. With this fix, the server group policy set in the install-config.yaml file for compute machine pools is applied at installation in the script flow. ( OCPBUGS-31050 ) Previously, when configuring an Agent-based installation that uses the openshift-baremetal-install binary, the Agent-based installer erroneously attempted to verify the libvirt network interfaces. This might cause the following error: Platform.BareMetal.externalBridge: Invalid value: "baremetal": could not find interface "baremetal" With this update, as the Agent-based installation method does not require libvirt, this erroneous validation has been disabled and the issue is resolved. 
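As a minimal sketch of the serverGroupPolicy property mentioned in the compute server group fix above, the setting is expressed per machine pool in install-config.yaml on OpenStack; the values here are illustrative:

```yaml
apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  replicas: 3
  platform:
    openstack:
      serverGroupPolicy: soft-anti-affinity   # policy now applied when creating compute server groups
controlPlane:
  name: master
  replicas: 3
  platform:
    openstack:
      serverGroupPolicy: anti-affinity
```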
( OCPBUGS-30941 ) Previously, using network types with dual-stack networking other than Open vSwitch-based software-defined networking (SDN) or Open Virtual Network (OVN) caused a validation error. With this release, the issue is resolved. ( OCPBUGS-30232 ) Previously, a closed IPv6 port range for nodePort services in user-provisioned-infrastructure installations on RHOSP caused traffic through certain node ports to be blocked. With this release, appropriate security group rules have been added to the security-group.yaml playbook, resolving the issue. ( OCPBUGS-30154 ) Previously, manifests that were generated by using the command openshift-install agent create cluster-manifests command were not directly applied to an OpenShift Container Platform cluster because the manifests did not include type data. With this release, type data has been added to the manifests. Administrators can now apply the manifests to initiate a Zero Touch Provisioning (ZTP) installation that uses the same settings as the Agent-based installation. ( OCPBUGS-29968 ) Previously, a file required for the aarch64 architecture was renamed by mistake while generating the aarch64 agent ISO. With this release, the specified file does not get renamed. ( OCPBUGS-28827 ) Previously, when installing a cluster on VMware vSphere, the installation would fail if an ESXi host was in maintenance mode due to the installation program failing to retrieve version information from the host. With this release, the installation program does not attempt to retrieve version information from ESXi hosts that are in maintenance mode, allowing the installation to proceed. ( OCPBUGS-27848 ) Previously, the IBM Cloud(R) Terraform Plugin incorrectly prevented the use of non-private service endpoints during cluster installation. With this release, the IBM Cloud(R) Terraform Plugin supports non-private service endpoints during installation. ( OCPBUGS-24473 ) Previously, installing a cluster on VMware vSphere required specifying the full path to the datastore. With this release, the installation program accepts full paths and relative paths for the datastore. ( OCPBUGS-22410 ) Previously, when you installed an OpenShift Container Platform cluster by using the Agent-based installation program, a large number of manifests before installation could fill the Ignition storage causing the installation to fail. With this release, the Ignition storage has been increased to allow for a much greater amount of installation manifests. ( OCPBUGS-14478 ) Previously, when the coreos-installer iso kargs show <iso> command was used on Agent ISO files, the output would not properly show the kernel arguments embedded in the specified ISO. With this release, the command output displays the information correctly. ( OCPBUGS-14257 ) Previously, Agent-based installations created ImageContentSource objects instead of ImageDigestSources even though the former object is deprecated. With this release, the Agent-based installation program creates ImageDigestSource objects. ( OCPBUGS-11665 ) Previously, there was an issue with the destroy functionality of the Power VS where not all resources were deleted as expected. With this release, the issue has been resolved. 
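For context on the ImageDigestSource change described above, this is roughly what the corresponding install-config.yaml stanza looks like for an Agent-based installation; the mirror registry host names are placeholders:

```yaml
apiVersion: v1
baseDomain: example.com
imageDigestSources:                          # replaces the deprecated imageContentSources stanza
- source: quay.io/openshift-release-dev/ocp-release
  mirrors:
  - mirror.registry.example.com/ocp/release
- source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
  mirrors:
  - mirror.registry.example.com/ocp/release
```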
( OCPBUGS-29425 ) Insights Operator The Insights Operator now collects instances of the following custom resources outside of the openshift-monitoring namespace: Kind: Prometheus Group: monitoring.coreos.com Kind: AlertManager Group: monitoring.coreos.com ( OCPBUGS-35086 ) Kubernetes Controller Manager Previously, when deleting a ClusterResourceQuota resource using the foreground deletion cascading strategy, the removal failed to complete. With this release, ClusterResourceQuota resources are deleted properly when using the foreground cascading strategy. ( OCPBUGS-22301 ) Machine Config Operator Previously, the MachineConfigNode object was not created with a proper owner. As a result, the MachineConfigNode object could not be garbage collected, meaning that previously generated, but no longer useful, objects were not removed. With this release, the proper owner is set upon the creation of the MachineConfigNode object and objects that become obsolete are available for garbage collection. ( OCPBUGS-30090 ) Previously, the default value of the nodeStatusUpdateFrequency parameter was changed from 0s to 10s . This change inadvertently caused the nodeStatusReportFrequency to increase significantly, because the nodeStatusReportFrequency value was linked to the nodeStatusUpdateFrequency value. This resulted in high CPU usage on control plane operators and the API server. This fix manually sets the nodeStatusReportFrequency value to 5m , which prevents this high CPU usage. ( OCPBUGS-29713 ) Previously, a typographical error in an environment variable prevented a script from detecting if the node.env file was present. Because of this, the node.env file would be overwritten on every restart, preventing the kubelet hostname from being fixed. With this fix, the typographical error is corrected. As a result, edits to the node.env file now persist across reboots. ( OCPBUGS-27261 ) Previously, when the kube-apiserver server Certificate Authority (CA) certificate was rotated, the Machine Config Operator (MCO) did not properly react and update the on-disk kubelet kubeconfig. This meant that the kubelet and some pods on the node were eventually unable to communicate with the API server, causing the node to enter the NotReady state. With this release, the MCO properly reacts to the change, and updates the on-disk kubeconfig so that authenticated communication with the API server can continue when the certificate rotates, and also restarts the kubelet and MCDaemon pod. The certificate authority has 10-year validity, so this rotation should happen rarely and is generally non-disruptive. ( OCPBUGS-25821 ) Previously, when a new node was added to or removed from a cluster, the MachineConfigNode (MCN) objects did not react. As a result, extraneous MCN objects existed. With this release, the Machine Config Operator removes and adds MCN objects as appropriate when nodes are added or removed. ( OCPBUGS-24416 ) Previously, the nodeip-configuration service did not send logs to the serial console, which made it difficult to debug problems when networking was not available and there was no access to the node. With this release, the nodeip-configuration service logs output to the serial console for easier debugging when there is no network access to the node. ( OCPBUGS-19628 ) Previously, when a MachineConfigPool had the OnClusterBuild functionality enabled and the configmap was updated with an invalid imageBuilderType , the machine-config ClusterOperator was not degraded.
With this release, the Machine Config Operator (MCO) ClusterOperator status now validates the OnClusterBuild inputs each time it syncs, ensuring that if those are invalid, the ClusterOperator is degraded. ( OCPBUGS-18955 ) Previously, when the machine config not found error was reported, there was not enough information to troubleshoot and correct the problem. With this release, an alert and metric have been added to the Machine Config Operator. As a result, you have more information to troubleshoot and remediate the machine config not found error. ( OCPBUGS-17788 ) Previously, the Afterburn service used to set the hostname on nodes timed out while waiting for the metadata service to become available, causing issues when deploying with OVN-Kubernetes. Now, the Afterburn service waits longer for the metadata service to become available, resolving these timeouts. ( OCPBUGS-11936 ) Previously, when a node was removed from a MachineConfigPool , the Machine Config Operator (MCO) did not report an error or the removal of the node. The MCO does not support managing nodes when they are not in a pool and there was no indication that node management ceased after the node was removed. With this release, if a node is removed from all pools, the MCO now logs an error. ( OCPBUGS-5452 ) Management Console Previously, the Debug container link was not shown for pods with a Completed status. With this release, the link shows as expected. ( OCPBUGS-34711 ) Previously, due to an issue in PatternFly 5, text boxes in the web console were no longer resizable. With this release, text boxes are again resizable. ( OCPBUGS-34393 ) Previously, French and Spanish were not available in the web console. With this release, translations for French and Spanish are now available. ( OCPBUGS-33965 ) Previously, the masthead logo was not restricted to a max-height of 60px. As a result, logos that are larger than 60px high display at their native size and cause the masthead to be too large. With this release, the masthead logo is restricted to a max-height of 60px. ( OCPBUGS-33523 ) Previously, there was a missing return statement in the HealthCheck controller causing it to panic under certain circumstances. With this release, the proper return statement was added to the HealthCheck controller so it no longer panics. ( OCPBUGS-33505 ) Previously, an incorrect field was sent to the API server without being noticed. With the implementation of admission webhook display warnings, the same action would return a warning notification. A fix was provided to resolve the issue. ( OCPBUGS-33222 ) Previously, the message text of a StatusItem might have been vertically misaligned with the icon when a timestamp was not present. With this release, the message text is correctly aligned. ( OCPBUGS-33219 ) Previously, the creator field was autopopulated and not mandatory. Updates to the API made the field empty from OpenShift Container Platform 4.15 and higher. With this release, the field is marked as mandatory for correct validation. ( OCPBUGS-31931 ) Previously, the YAML editor in the web console did not have the Create button and samples did not show on the web console. With this release, you can now see the Create button and the samples. ( OCPBUGS-31703 ) Previously, changes to the bridge server flags on an external OpenID Connect (OIDC) feature caused the bridge server to fail to start in local development. With this release, the flag usage is updated and the bridge server starts.
( OCPBUGS-31695 ) Previously, when editing a VMware vSphere connection, the form could be submitted even if no values were actually changed. This resulted in unnecessary node reboots. With this release, the console now detects the form changes, and does not allow submission if no value was changed. ( OCPBUGS-31613 ) Previously, the NetworkAttachmentDefinition was always created in the default namespace if the form method from the console was used. The selected name was also not honored; the NetworkAttachmentDefinition object was created with the selected name and a random suffix appended. With this release, the NetworkAttachmentDefinition object is created in the current project. ( OCPBUGS-31558 ) Previously, when clicking the Configure button by the AlertmanagerReceiversNotConfigured alert, the Configuration page did not show. With this release, the link in the AlertmanagerReceiversNotConfigured alert is fixed and directs you to the Configuration page. ( OCPBUGS-30805 ) Previously, plugins using ListPageFilters were only using two filters: label and name. With this release, a parameter was added that enables plugins to configure multiple text-based search filters. ( OCPBUGS-30077 ) Previously, there was no response when clicking on quick start items. With this release, the quick start window shows when clicking on the quick start selections. ( OCPBUGS-29992 ) Previously, the OpenShift Container Platform web console terminated unexpectedly if authentication discovery failed on the first attempt. With this release, authentication initialization was updated to retry up to 5 minutes before failing. ( OCPBUGS-29479 ) Previously, there was an issue causing an error message on the Image Manifest Vulnerability page after an Image Manifest Vulnerability (IMV) was created in the CLI. With this release, the error message no longer shows. ( OCPBUGS-28967 ) Previously, when using the modal dialog in a hook as part of the actions hook, an error occurred because the console framework passed null objects as part of the render cycle. With this release, getGroupVersionKindForResource is now null-safe and will return undefined if the apiVersion or kind are undefined. Additionally, the run time error for useDeleteModal no longer occurs, but note that it will not work with an undefined resource. ( OCPBUGS-28856 ) Previously, the Expand PersistentVolumeClaim modal assumed that the pvc.spec.resources.requests.storage value included a unit. With this release, the size is updated to 2GiB and you can change the value of the persistent volume claim (PVC). ( OCPBUGS-27779 ) Previously, the values of image vulnerabilities reported in the OpenShift Container Platform web console were inconsistent. With this release, the image vulnerabilities on the Overview page were removed. ( OCPBUGS-27455 ) Previously, a certificate signing request (CSR) could show for a recently approved Node. With this release, the duplication is detected and CSRs are not shown for approved Nodes. ( OCPBUGS-27399 ) Previously, the Type column was not first on the condition table on the MachineHealthCheck detail page. With this release, the Type is now listed first on the condition table. ( OCPBUGS-27246 ) Previously, the console plugin proxy was not copying the status code from plugin service responses. This caused all responses from the plugin service to have a 200 status, causing unexpected behavior, especially around browser caching. With this release, the console proxy logic was updated to forward the plugin service proxy response status code.
Proxied plugin requests now behave as expected. ( OCPBUGS-26933 ) Previously, when cloning a persistent volume claim (PVC), the modal assumed that the pvc.spec.resources.requests.storage value included a unit. With this release, pvc.spec.resources.requests.storage includes a unit suffix and the Clone PVC modal works as expected. ( OCPBUGS-26772 ) Previously, escaped strings were not handled properly when editing a VMware vSphere connection, causing a broken VMware vSphere configuration. With this release, the escaped strings work as expected and the VMware vSphere configuration no longer breaks. ( OCPBUGS-25942 ) Previously, when configuring a VMware vSphere connection, the resourcepool-path key was not added to the VMware vSphere config map which might have caused issues connecting to VMware vSphere. With this release, there are no longer issues connecting to VMware vSphere. ( OCPBUGS-25927 ) Previously, there was missing text in the Customer feedback modal. With this release, the link text is restored and the correct Red Hat image is displayed. ( OCPBUGS-25843 ) Previously, the Update cluster modal would not open when clicking Select a version from the Cluster Settings page. With this release, the Update cluster modal shows when clicking Select a version . ( OCPBUGS-25780 ) Previously, the filter part in the resource section of the Search page did not work on a mobile device. With this release, filtering now works as expected on a mobile device. ( OCPBUGS-25530 ) Previously, the console Operator was using a client instead of listers for fetching a cluster resource. This caused the Operator to do operations on resources with an older revision. With this release, the console Operator uses listers to fetch data from the cluster instead of clients. ( OCPBUGS-25484 ) Previously, the console was incorrectly parsing restore size values from volume snapshots in the restore as new persistent volume claims (PVC) modal. With this release, the modal parses the restore size correctly. ( OCPBUGS-24637 ) Previously, the Alerting , Metrics , and Target pages were not available in the console due to a change in the routing library. With this release, routes load correctly. ( OCPBUGS-24515 ) Previously, there was a runtime error on the Node details page when a MachineHealthCheck without conditions existed. With this release, the Node details page loads as expected. ( OCPBUGS-24408 ) Previously, the console backend would proxy operand list requests to the public API server endpoint, which caused CA certificate issues under some circumstances. With this release, the proxy configuration was updated to point to the internal API server endpoint which fixed this issue. ( OCPBUGS-22487 ) Previously, a deployment could not be scaled up or down when a HorizontalPodAutoscaler was present. With this release, when a deployment with a HorizontalPodAutoscaler is scaled down to zero , an Enable Autoscale button is displayed so you can enable pod autoscaling. ( OCPBUGS-22405 ) Previously, when editing a file, the Info alert:Non-printable file detected. File contains non-printable characters. Preview is not available. error was presented. With this release, a check was added to determine if a file is binary, and you are able to edit the file as expected. ( OCPBUGS-18699 ) Previously, the console API conversion webhook server could not update serving certificates at runtime, and would fail if these certificates were updated by deleting the signing key.
This would cause the console to not recover when CA certs were rotated. With this release, the console conversion webhook server was updated to detect CA certificate changes, and handle them at runtime. The server now remains available and the console recovers as expected after CA certificates are rotated. ( OCPBUGS-15827 ) Previously, production builds of the console front-end bundle had source maps disabled. As a consequence, browser tools for analyzing source code could not be used on production builds. With this release, the console Webpack configuration is updated to enable source maps on production builds. Browser tools will now work as expected for both dev and production builds. ( OCPBUGS-10851 ) Previously, the console redirect service had the same service Certificate Authority (CA) controller annotation as the console service. This caused the service CA controller to sometimes incorrectly sync CA certs for these services, and the console would not function correctly after removing and reinstalling. With this release, the console Operator was updated to remove this service CA annotation from the console redirect service. The console services and CA certs now function as expected when the Operator transitions from a removed to a managed state. ( OCPBUGS-7656 ) Previously, removing an alternate service when editing a Route by using the Form view did not result in the removal of the alternate service from the Route. With this update, the alternate service is now removed. ( OCPBUGS-33011 ) Previously, nodes of paused MachineConfigPools might be incorrectly unpaused when performing a cluster update. With this release, nodes of paused MachineConfigPools correctly stay paused when performing a cluster update. ( OCPBUGS-23319 ) Monitoring Previously, the Fibre Channel collector in the node-exporter agent failed if certain Fibre Channel device drivers did not expose all attributes. With this release, the Fibre Channel collector disregards these optional attributes and the issue has been resolved. ( OCPBUGS-20151 ) Previously, the oc get podmetrics and oc get nodemetrics commands were not working properly. With this release, the issue has been resolved. ( OCPBUGS-25164 ) Previously, setting an invalid .spec.endpoints.proxyUrl attribute in the ServiceMonitor resource would result in breaking, reloading, and restarting Prometheus. This update fixes the issue by validating the proxyUrl attribute against invalid syntax. ( OCPBUGS-30989 ) Networking Previously, the API documentation for the status.componentRoutes.currentHostnames field in the Ingress API included developer notes. After you entered the oc explain ingresses.status.componentRoutes.currentHostnames --api-version=config.openshift.io/v1 command, developer notes would show in the output along with the intended information. With this release, the developer notes are removed from the status.componentRoutes.currentHostnames field, so that after you enter the command, the output lists current hostnames used by the route. ( OCPBUGS-31058 ) Previously, the load balancing algorithm did not differentiate between active and inactive services when determining weights, and it employed a random algorithm excessively in environments with many inactive services or environments routing backends with weight 0 . This led to increased memory usage and a higher risk of excessive memory consumption.
With this release, changes optimize traffic direction towards active services only and prevent unnecessary use of a random algorithm with higher weights, reducing the potential for excessive memory consumption. ( OCPBUGS-29690 ) Previously, if the same certificate was specified in multiple routes or if a route specified the default certificate as a custom certificate, and HTTP/2 was enabled on the router, an HTTP/2 client could perform connection coalescing on routes. Clients, such as a web browser, could re-use connections and potentially connect to the wrong backend server. With this release, the OpenShift Container Platform router now checks when the same certificate is specified on more than one route or when a route specifies the default certificate as a custom certificate. When either one of these conditions is detected, the router configures the HAProxy load balancer so that it does not allow HTTP/2 client connections to any routes that use these certificates. ( OCPBUGS-29373 ) Previously, if you configured a deployment with the routingViaHost parameter set to true , traffic failed to reach the IPv6 ExternalTrafficPolicy=Local load balancer service. With this release, the issue is fixed. ( OCPBUGS-27211 ) Previously, a pod selected by an EgressIp object that was hosted on a secondary network interface controller (NIC) caused connections to node IP addresses to time out. With this release, the issue is fixed. ( OCPBUGS-26979 ) Previously, a leap file package that the OpenShift Container Platform Precision Time Protocol (PTP) Operator installed could not be used by the ts2phc process because the package expired. With this release, the leap file package is updated to read leap events from Global Positioning System (GPS) signals and update the offset dynamically so that the expired package situation no longer occurs. ( OCPBUGS-25939 ) Previously, pods assigned an IP from the pool created by the Whereabouts CNI plugin were getting stuck in the ContainerCreating state after a forced node reboot. With this release, the Whereabouts CNI plugin issue associated with the IP allocation after a forced node reboot is resolved. ( OCPBUGS-24608 ) Previously, there was a conflict between two scripts on OpenShift Container Platform in IPv6, including single and dual-stack, deployments. One script set the hostname to a fully qualified domain name (FQDN) but the other script might set it to a short name too early. This conflict happened because the event that triggered setting the hostname to FQDN might run after the script that set it to a short name. This occurred due to asynchronous network events. With this release, new code has been added to ensure that the FQDN is set properly. This new code ensures that there is a wait for a specific network event before allowing the hostname to be set. ( OCPBUGS-22324 ) Previously, if a pod selected by an EgressIP through a secondary interface had its label removed, another pod in the same namespace would also lose its EgressIP assignment, breaking its connection to the external host. With this release, the issue is fixed, so that when a pod label is removed and it stops using the EgressIP , other pods with the matching label continue to use the EgressIP without interruption. ( OCPBUGS-20220 ) Previously, the global navigation satellite system (GNSS) module was capable of reporting both the GPS fix position and the GNSS offset position, which represents the offset between the GNSS module and the constellations.
The T-GM did not use the ubloxtool CLI tool to probe the ublox module for reading offset and fix positions. Instead, it could only read the GPS fix information via GPSD. The reason for this was that the implementation of the ubloxtool CLI tool took 2 seconds to receive a response, and with every call it increased CPU usage threefold. With this release, the ubloxtool request is now optimized, and the GPS offset position is now available. ( OCPBUGS-17422 ) Previously, EgressIP pods hosted by a secondary interface would not fail over because of a race condition. Users would receive an error message indicating that the EgressIP pod could not be assigned because it conflicted with an existing IP address. With this release, the EgressIP pod moves to an egress node. ( OCPBUGS-20209 ) Previously, when a MAC address changed on the physical interface being used by OVN-Kubernetes, it would not be updated correctly within OVN-Kubernetes and could cause traffic disruption and Kube API outages from the node for a prolonged period of time. This was most common when a bond interface was being used, where the MAC address of the bond might swap depending on which device was the first to come up. With this release, the issue is fixed so that OVN-Kubernetes dynamically detects MAC address changes and updates them correctly. ( OCPBUGS-18716 ) Previously, IPv6 was unsupported when assigning an egress IP to a network interface that was not the primary network interface. This issue has been resolved, and the egress IP can be IPv6. ( OCPBUGS-24271 ) Previously, the network-tools image, which is a debugging tool, included the Wireshark network protocol analyzer. Wireshark had a dependency on the gstreamer1 package, and this package has specific licensing requirements. With this release, the gstreamer1 package is removed from the network-tools image and the image now includes the wireshark-cli package. ( OCPBUGS-31699 ) Previously, when the default gateway of a node was set to vlan and multiple NetworkManager connections had the same name, the node would fail as it could not configure the default OVN-Kubernetes bridge. With this release, the configure-ovs.sh shell script includes an nmcli connection show uuid command that retrieves the correct NetworkManager connection if many connections with the same name exist. ( OCPBUGS-24356 ) For OpenShift Container Platform clusters on Microsoft Azure, when using OVN-Kubernetes as the Container Network Interface (CNI), there was an issue where the source IP recognized by the pod was the OVN gateway router of the node when using a load balancer service with externalTrafficPolicy: Local . This occurred due to a Source Network Address Translation (SNAT) being applied to UDP packets. With this update, session affinity without a timeout is possible by setting the affinity timeout to a higher value, for example, 86400 seconds, or 24 hours. As a result, the affinity is treated as permanent unless there are network disruptions like endpoints or nodes going down, and session affinity is more persistent. ( OCPBUGS-24219 ) Node Previously, OpenShift Container Platform upgrades for Ansible caused an error as the IPsec configuration was not idempotent. With this update, the issue is resolved. Now, all IPsec configurations for OpenShift Ansible playbooks are idempotent. ( OCPBUGS-30802 ) Previously, CRI-O removed all of the images installed between minor version upgrades of OpenShift Container Platform to ensure stale payload images did not take up space on the node.
However, it was decided this was a performance penalty, and this functionality was removed. With this fix, the kubelet will still garbage collect stale images after disk usage hits a certain level. As a result, OpenShift Container Platform no longer removes all images after an upgrade between minor versions. ( OCPBUGS-24743 ) Node Tuning Operator (NTO) Previously, the distributed unit profile on single-node OpenShift Container Platform was degraded because the net.core.busy_read , net.core.busy_poll , and kernel.numa_balancing sysctls did not exist in the real-time kernel. With this release, the Tuned profile is no longer degraded and the issue has been resolved. ( OCPBUGS-23167 ) Previously, the Tuned profile reported a Degraded condition after PerformanceProfile was applied. The profile had attempted to set a sysctl value for the default Receive Packet Steering (RPS) mask, but the mask was already configured with the same value using an /etc/sysctl.d file. With this update, the sysctl value is no longer set with the Tuned profile and the issue has been resolved. ( OCPBUGS-24638 ) Previously, the Performance Profile Creator (PPC) incorrectly populated the metadata.ownerReferences.uid field for Day 0 performance profile manifests. As a result, it was impossible to apply a performance profile at Day 0 without manual intervention. With this release, the PPC does not generate the metadata.ownerReferences.uid field for Day 0 manifests. As a result, you can apply a performance profile manifest at Day 0 as expected. ( OCPBUGS-29751 ) Previously, the TuneD daemon could unnecessarily reload an additional time after a Tuned custom resource (CR) update. With this release, the Tuned object has been removed and the TuneD (daemon) profiles are carried directly in the Tuned Profile Kubernetes objects. As a result, the issue has been resolved. ( OCPBUGS-32469 ) OpenShift CLI (oc) Previously, when mirroring operator images with incompatible semantic versioning, oc-mirror plugin v2 (Technology Preview) would fail and exit. This fix ensures that a warning appears in the console, indicating the skipped image and allowing the mirroring process to continue without interruption. ( OCPBUGS-34587 ) Previously, oc-mirror plugin v2 (Technology Preview) failed to mirror certain Operator catalogs that included image references with both tag and digest formats. This issue prevented the creation of cluster resources, such as ImageDigestMirrorSource (IDMS) and ImageTagMirrorSource (ITMS). With this update, oc-mirror resolves the issue by skipping images that have both tag and digest references, while displaying an appropriate warning message in the console output. ( OCPBUGS-33196 ) Previously, with oc-mirror plugin v2 (Technology Preview), mirroring errors were only displayed in the console output, making it difficult for users to analyze and troubleshoot other issues. For example, an unstable network might require a rerun, while a manifest unknown error might need further analysis to skip an image or Operator. With this update, a file is generated that contains all errors in the workspace working-dir/logs folder. And all the errors that occur during the mirroring process are now logged in mirroring_errors_YYYYMMdd.txt . ( OCPBUGS-33098 ) Previously, the Cloud Credential Operator utility ( ccoctl ) could not run on a RHEL 9 host with FIPS enabled. With this release, a user can run a version of the ccoctl utility that is compatible with the RHEL version of their host, including RHEL 9. 
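Relating to the Performance Profile Creator (PPC) fix earlier in this section, a Day 0 performance profile manifest of roughly the following shape can now be included with the installation manifests without manual edits. The CPU sets and node selector are illustrative assumptions:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: day0-performance      # no metadata.ownerReferences.uid is generated for Day 0 manifests
spec:
  cpu:
    isolated: "2-31"          # illustrative isolated CPU set
    reserved: "0-1"           # illustrative reserved CPU set
  realTimeKernel:
    enabled: true
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
```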
( OCPBUGS-32080 ) Previously, when mirroring operator catalogs, oc-mirror would rebuild the catalogs and regenerate their internal cache based on imagesetconfig catalog filtering specifications. This process required the opm binary from within the catalogs. Starting with version 4.15, operator catalogs include the opm RHEL 9 binary, which caused the mirroring process to fail when executed on RHEL 8 systems. With this release, oc-mirror no longer rebuilds catalogs by default; instead, it simply mirrors them to their destination registries. To retain the catalog rebuilding functionality, use --rebuild-catalog . However, note that no changes were made to the current implementation, so using this flag might result in the cache not being generated or the catalog not being deployed to the cluster. If you use this command, you can export OPM_BINARY to specify a custom opm binary that corresponds to the catalog versions and platform found in OpenShift Container Platform. Mirroring of catalog images is now done without signature verification. Use --enable-operator-secure-policy to enable signature verification during mirroring. ( OCPBUGS-31536 ) Previously, some credentials requests were not extracted properly when running the oc adm release extract --credentials-requests command with an install-config.yaml file that included the CloudCredential cluster capability. With this release, the CloudCredential capability is correctly included in the OpenShift CLI ( oc ) so that this command extracts credentials requests properly. ( OCPBUGS-24834 ) Previously, users encountered sequence errors when using the tar.gz artifact with the oc-mirror plugin. To resolve this, the oc-mirror plugin now ignores these errors when executed with the --skip-pruning flag. This update ensures that the sequence error, which no longer affects the order of tar.gz usage in mirroring, is effectively handled. ( OCPBUGS-23496 ) Previously, when using the oc-mirror plugin to mirror local Open Container Initiative Operator catalogs located in hidden folders, oc-mirror previously failed with an error: ".hidden_folder/data/publish/latest/catalog-oci/manifest-list/kubebuilder/kube-rbac-proxy@sha256:db06cc4c084dd0253134f156dddaaf53ef1c3fb3cc809e5d81711baa4029ea4c is not a valid image reference: invalid reference format ". With this release, oc-mirror now calculates references to images within local Open Container Initiative catalogs differently, ensuring that the paths to hidden catalogs no longer disrupt the mirroring process. ( OCPBUGS-23327 ) Previously, oc-mirror would not stop and return a valid error code when mirroring failed. With this release, oc-mirror now exits with the correct error code when encountering "operator not found", unless the --continue-on-error flag is used. ( OCPBUGS-23003 ) Previously, when mirroring operators, oc-mirror would ignore the maxVersion constraint in imageSetConfig if both minVersion and maxVersion were specified. This resulted in mirroring all bundles up to the channel head. With this release, oc-mirror now considers the maxVersion constraint as specified in imageSetConfig . ( OCPBUGS-21865 ) Previously, oc-mirror failed to mirror releases using eus-* channels, as it did not recognize that eus-* channels are designated for even-numbered releases only. With this release, oc-mirror plugin now properly acknowledges that eus-* channels are intended for even-numbered releases, enabling users to successfully mirror releases using these channels. 
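A hedged ImageSetConfiguration fragment, using the v1alpha2 schema, that shows the minVersion and maxVersion constraints and an eus-* release channel as described in the oc-mirror fixes above; the catalog and Operator names are examples only:

```yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  platform:
    channels:
    - name: eus-4.16               # eus-* channels apply to even-numbered releases only
      type: ocp
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16
    packages:
    - name: example-operator       # example Operator name
      channels:
      - name: stable
        minVersion: 1.2.0
        maxVersion: 1.4.0          # bundles above maxVersion are no longer mirrored
```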
( OCPBUGS-19429 ) Previously, the addition of the defaultChannel field in the mirror.operators.catalog.packages section enabled users to specify their preferred channel, overriding the defaultChannel set in the operator. With this release, the oc-mirror plugin now enforces an initial check: if the defaultChannel field is set, users must also define it in the channels section of the ImageSetConfig . ( OCPBUGS-385 ) Previously, when running a cluster with FIPS enabled, you might have received the following error when running the OpenShift CLI ( oc ) on a RHEL 9 system: FIPS mode is enabled, but the required OpenSSL backend is unavailable . With this release, the default version of OpenShift CLI ( oc ) is compiled with Red Hat Enterprise Linux (RHEL) 9 and works properly when running a cluster with FIPS enabled on RHEL 9. Additionally, a version of oc compiled with RHEL 8 is also provided, which must be used if you are running a cluster with FIPS enabled on RHEL 8. ( OCPBUGS-23386 , OCPBUGS-28540 ) Previously, role bindings related to the ImageRegistry and Build capabilities were created in every namespace, even if the capability was disabled. With this release, the role bindings are only created if the respective cluster capability is enabled on the cluster. ( OCPBUGS-34384 ) Previously, during the disk-to-mirror process for fully disconnected environments, oc-mirror plugin v1 would fail to mirror the catalog image when access to Red Hat registries was blocked. Additionally, if the ImageSetConfiguration used a targetCatalog for the mirrored catalog, mirroring would fail due to incorrect catalog image references regardless of the workflow. This issue has been resolved by updating the catalog image source for mirroring to the mirror registry. ( OCPBUGS-34646 ) Operator Lifecycle Manager (OLM) Previously, Operator catalogs were not being refreshed properly, due to the imagePullPolicy field being set to IfNotPresent for the index image. This bug fix updates OLM to use the appropriate image pull policy for catalogs, and as a result catalogs are refreshed properly. ( OCPBUGS-30132 ) Previously, cluster upgrades could be blocked due to OLM getting stuck in a CrashLoopBackOff state. This was due to an issue with resources having multiple owner references. This bug fix updates OLM to avoid duplicate owner references and only validate the related resources that it owns. As a result, cluster upgrades can proceed as expected. ( OCPBUGS-28744 ) Previously, default OLM catalog pods backed by a CatalogSource object would not survive an outage of the node that they were being run on. The pods remained in termination state, despite the tolerations that should move them. This caused Operators to no longer be able to be installed or updated from related catalogs. This bug fix updates OLM so catalog pods that get stuck in this state are deleted. As a result, catalog pods now correctly recover from planned or unplanned node maintenance. ( OCPBUGS-32183 ) Previously, installing an Operator could sometimes fail if the same Operator had been previously installed and uninstalled. This was due to a caching issue. This bug fix updates OLM to correctly install the Operator in this scenario, and as a result this issue no longer occurs. ( OCPBUGS-31073 ) Previously, the catalogd component could crash loop after an etcd restore.
This was due to the garbage collection process causing a looping failure state when the API server was unreachable. This bug fix updates catalogd to add a retry loop, and as a result catalogd no longer crashes in this scenario. ( OCPBUGS-29453 ) Previously, the default catalog source pod would not receive updates, requiring users to manually re-create it to get updates. This was caused by image IDs for catalog pods not getting detected correctly. This bug fix updates OLM to correctly detect catalog pod image IDs, and as a result, default catalog sources are updated as expected. ( OCPBUGS-31438 ) Previously, users could experience Operator installation errors due to OLM not being able to find existing ClusterRoleBinding or Service resources and creating them a second time. This bug fix updates OLM to pre-create these objects, and as a result these installation errors no longer occur. ( OCPBUGS-24009 ) Red Hat Enterprise Linux CoreOS (RHCOS) Previously, the OVS network was configured before the kdump service generated its special initramfs . When the kdump service started, it picked up the network-manager configuration files and copied them into the kdump initramfs . When the node rebooted into the kdump initramfs , the kernel crash dump upload over the network failed because OVN did not run in the initramfs and the virtual interface was not configured. With this release, the ordering has been updated so that the kdump service starts and builds the kdump initramfs before the OVS networking configuration is set up, and the issue has been resolved. ( OCPBUGS-30239 ) Scalability and performance Previously, the Machine Config Operator (MCO) on single-node OpenShift Container Platform was rendered after the Performance Profile rendered, so the control plane and worker machine config pools were not created at the right time. With this release, the Performance Profile renders correctly and the issue is resolved. ( OCPBUGS-22095 ) Previously, the TuneD and irqbalanced daemons modified the Interrupt Request (IRQ) CPU affinity configuration, which created conflicts in the IRQ CPU affinity configuration and caused unexpected behavior after a single-node OpenShift node restart. With this release, only the irqbalanced daemon determines IRQ CPU affinity configuration. ( OCPBUGS-26400 ) Previously, during OpenShift Container Platform updates in performance-tuned clusters, resuming a MachineConfigPool resource resulted in additional restarts for nodes in the pool. With this release, the controller reconciles against the latest planned machine configurations before the pool resumes, which prevents additional node reboots. ( OCPBUGS-31271 ) Previously, ARM installations used 4k pages in the kernel. With this release, support was added for installing 64k pages in the kernel at installation time only, providing a performance boost on the NVIDIA CPU. Driver Toolkit (DTK) was also updated to compile kernel modules for the 64k page size ARM kernel. ( OCPBUGS-29223 ) Storage Previously, some LVMVolumeGroupNodeStatus operands were not deleted on the cluster during the deletion of the LVMCluster custom resource (CR). With this release, deleting the LVMCluster CR triggers the deletion of all the LVMVolumeGroupNodeStatus operands. ( OCPBUGS-32954 ) Previously, LVM Storage uninstallation was stuck waiting for the deletion of the LVMVolumeGroupNodeStatus operands. This fix corrects the behavior by ensuring all operands are deleted, allowing LVM Storage to be uninstalled without delay.
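For reference on the LVM Storage fixes described above, an LVMCluster custom resource typically has roughly the following shape; the device class and thin pool names are illustrative:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: vg1                    # illustrative device class name
      default: true
      thinPoolConfig:
        name: thin-pool-1          # illustrative thin pool name
        sizePercent: 90
        overprovisionRatio: 10
```

Deleting this CR now also triggers deletion of the associated LVMVolumeGroupNodeStatus operands, so uninstalling LVM Storage no longer hangs.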
( OCPBUGS-32753 ) Previously, LVM Storage did not support minimum storage size for persistent volume claims (PVCs). This could lead to mount failures while provisioning PVCs. With this release, LVM Storage supports minimum storage size for PVCs. The following are the minimum storage sizes that you can request for each file system type: block: 8 MiB, xfs: 300 MiB, ext4: 32 MiB. If the value of the requests.storage field in the PersistentVolumeClaim object is less than the minimum storage size, the requested storage size is rounded to the minimum storage size. If the value of the limits.storage field is less than the minimum storage size, PVC creation fails with an error. ( OCPBUGS-30266 ) Previously, LVM Storage created persistent volume claims (PVCs) with storage size requests that were not multiples of the disk sector size. This could cause issues during LVM2 volume creation. This fix corrects the behavior by rounding the storage size requested by PVCs to the nearest multiple of 512. ( OCPBUGS-30032 ) Previously, the LVMCluster custom resource (CR) contained an excluded status element for a device that was set up correctly. This fix filters the correctly set up device from being considered for an excluded status element, so it appears only in the ready devices. ( OCPBUGS-29188 ) Previously, CPU limits for the Amazon Web Services (AWS) Elastic File Store (EFS) Container Storage Interface (CSI) driver container could cause performance degradation of volumes managed by the AWS EFS CSI Driver Operator. With this release, the CPU limits from the AWS EFS CSI driver container are removed to help prevent potential performance degradation. ( OCPBUGS-28551 ) Previously, the Microsoft Azure Disk CSI driver was not properly counting allocatable volumes on certain instance types and exceeded the maximum. As a result, the pod could not start. With this release, the count table for the Microsoft Azure Disk CSI driver has been updated to include new instance types. The pod now runs and data can be read and written to the properly configured volumes. ( OCPBUGS-18701 ) Previously, the secrets store Container Storage Interface driver on Hosted Control Planes failed to mount secrets because of a bug in the CLI. With this release, the driver is able to mount volumes and the issue has been resolved. ( OCPBUGS-34759 ) Previously, static Persistent Volumes (PVs) in Microsoft Azure Workload Identity clusters could not be configured due to a bug in the driver, causing PV mounts to fail. With this release, the driver works and static PVs mount correctly. ( OCPBUGS-32785 ) 1.7. Technology Preview features status Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope In the following tables, features are marked with the following statuses: Not Available, Technology Preview, General Availability, Deprecated, Removed.
Networking Technology Preview features
Table 1.19.
Previously, LVM Storage created persistent volume claims (PVCs) with storage size requests that were not multiples of the disk sector size. This could cause issues during LVM2 volume creation. This fix corrects the behavior by rounding the storage size requested by PVCs to the nearest multiple of 512. ( OCPBUGS-30032 ) Previously, the LVMCluster custom resource (CR) contained an excluded status element for a device that was set up correctly. This fix prevents a correctly configured device from being considered for an excluded status element, so it appears only in the ready devices. ( OCPBUGS-29188 ) Previously, CPU limits for the Amazon Web Services (AWS) Elastic File System (EFS) Container Storage Interface (CSI) driver container could cause performance degradation of volumes managed by the AWS EFS CSI Driver Operator. With this release, the CPU limits from the AWS EFS CSI driver container are removed to help prevent potential performance degradation. ( OCPBUGS-28551 ) Previously, the Microsoft Azure Disk CSI driver was not properly counting allocatable volumes on certain instance types and exceeded the maximum. As a result, the pod could not start. With this release, the count table for the Microsoft Azure Disk CSI driver has been updated to include new instance types. The pod now runs and data can be read and written to the properly configured volumes. ( OCPBUGS-18701 ) Previously, the secrets store Container Storage Interface driver on Hosted Control Planes failed to mount secrets because of a bug in the CLI. With this release, the driver is able to mount volumes and the issue has been resolved. ( OCPBUGS-34759 ) Previously, static Persistent Volumes (PVs) in Microsoft Azure Workload Identity clusters could not be configured due to a bug in the driver, causing PV mounts to fail. With this release, the driver works and static PVs mount correctly. ( OCPBUGS-32785 ) 1.7. Technology Preview features status Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope In the following tables, features are marked with the following statuses: Not Available, Technology Preview, General Availability, Deprecated, Removed. Networking Technology Preview features

Table 1.19. Networking Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Ingress Node Firewall Operator | General Availability | General Availability | General Availability
Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Technology Preview | Technology Preview | Technology Preview
Multi-network policies for SR-IOV networks | Technology Preview | General Availability | General Availability
OVN-Kubernetes network plugin as secondary network | General Availability | General Availability | General Availability
Updating the interface-specific safe sysctls list | Technology Preview | Technology Preview | Technology Preview
Egress service custom resource | Technology Preview | Technology Preview | Technology Preview
VRF specification in BGPPeer custom resource | Technology Preview | Technology Preview | Technology Preview
VRF specification in NodeNetworkConfigurationPolicy custom resource | Technology Preview | Technology Preview | Technology Preview
Admin Network Policy (AdminNetworkPolicy) | Technology Preview | Technology Preview | General Availability
IPsec external traffic (north-south) | Technology Preview | General Availability | General Availability
Integration of MetalLB and FRR-K8s | Not Available | Not Available | Technology Preview
Dual-NIC hardware as PTP boundary clock | General Availability | General Availability | General Availability
Egress IPs on additional network interfaces | General Availability | General Availability | General Availability
Dual-NIC Intel E810 PTP boundary clock with highly available system clock | Not Available | Not Available | General Availability
Intel E810 Westport Channel NIC as PTP grandmaster clock | Technology Preview | Technology Preview | General Availability
Dual-NIC Intel E810 Westport Channel as PTP grandmaster clock | Not Available | Technology Preview | General Availability
Configure the br-ex bridge needed by OVN-Kubernetes to use NMState | Not Available | Not Available | General Availability
Creating a route with externally managed certificate | Not Available | Not Available | Technology Preview
Live migration to OVN-Kubernetes from OpenShift SDN | Not Available | Not Available | General Availability
Overlapping IP configuration for multi-tenant networks with Whereabouts | Not Available | Not Available | General Availability
Improved integration between CoreDNS and egress firewall | Not Available | Not Available | Technology Preview

Storage Technology Preview features

Table 1.20. Storage Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview
Google Filestore CSI Driver Operator | General Availability | General Availability | General Availability
IBM Power(R) Virtual Server Block CSI Driver Operator | Technology Preview | General Availability | General Availability
Read Write Once Pod access mode | Technology Preview | Technology Preview | General Availability
Build CSI Volumes in OpenShift Builds | General Availability | General Availability | General Availability
Shared Resources CSI Driver in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview
Secrets Store CSI Driver Operator | Technology Preview | Technology Preview | Technology Preview
CIFS/SMB CSI Driver Operator | Not Available | Not Available | Technology Preview

Installation Technology Preview features
Table 1.21. Installation Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Installing OpenShift Container Platform on Oracle(R) Cloud Infrastructure (OCI) with VMs | General Availability | General Availability | General Availability
Installing OpenShift Container Platform on Oracle(R) Cloud Infrastructure (OCI) on bare metal | Developer Preview | Developer Preview | Developer Preview
Adding kernel modules to nodes with kvc | Technology Preview | Technology Preview | Technology Preview
Enabling NIC partitioning for SR-IOV devices | Technology Preview | Technology Preview | Technology Preview
User-defined labels and tags for Google Cloud Platform (GCP) | Technology Preview | Technology Preview | Technology Preview
Installing a cluster on Alibaba Cloud by using installer-provisioned infrastructure | Technology Preview | Technology Preview | Not Available
Installing a cluster on Alibaba Cloud by using Assisted Installer | Not Available | Not Available | Technology Preview
Mount shared entitlements in BuildConfigs in RHEL | Technology Preview | Technology Preview | Technology Preview
Selectable Cluster Inventory | Technology Preview | Technology Preview | Technology Preview
Static IP addresses with VMware vSphere (IPI only) | Technology Preview | Technology Preview | General Availability
Support for iSCSI devices in RHCOS | Not Available | Technology Preview | General Availability
Installing a cluster on GCP using the Cluster API implementation | Not Available | Not Available | Technology Preview
Support for Intel(R) VROC-enabled RAID devices in RHCOS | Technology Preview | Technology Preview | General Availability

Node Technology Preview features

Table 1.22. Nodes Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
MaxUnavailableStatefulSet featureset | Technology Preview | Technology Preview | Technology Preview

Multi-Architecture Technology Preview features

Table 1.23. Multi-Architecture Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
IBM Power(R) Virtual Server using installer-provisioned infrastructure | Technology Preview | General Availability | General Availability
kdump on arm64 architecture | Technology Preview | Technology Preview | Technology Preview
kdump on s390x architecture | Technology Preview | Technology Preview | Technology Preview
kdump on ppc64le architecture | Technology Preview | Technology Preview | Technology Preview
Multiarch Tuning Operator | Not Available | Not Available | General Availability

Specialized hardware and driver enablement Technology Preview features

Table 1.24. Specialized hardware and driver enablement Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Driver Toolkit | General Availability | General Availability | General Availability
Kernel Module Management Operator | General Availability | General Availability | General Availability
Kernel Module Management Operator - Hub and spoke cluster support | General Availability | General Availability | General Availability
Node Feature Discovery | General Availability | General Availability | General Availability

Scalability and performance Technology Preview features
Table 1.25. Scalability and performance Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
factory-precaching-cli tool | Technology Preview | Technology Preview | Technology Preview
Hyperthreading-aware CPU manager policy | Technology Preview | Technology Preview | Technology Preview
HTTP transport replaces AMQP for PTP and bare-metal events | Technology Preview | Technology Preview | General Availability
Mount namespace encapsulation | Technology Preview | Technology Preview | Technology Preview
Node Observability Operator | Technology Preview | Technology Preview | Technology Preview
Tuning etcd latency tolerances | Technology Preview | Technology Preview | General Availability
Increasing the etcd database size | Not Available | Not Available | Technology Preview
Using RHACM PolicyGenerator resources to manage GitOps ZTP cluster policies | Not Available | Not Available | Technology Preview

Operator lifecycle and development Technology Preview features

Table 1.26. Operator lifecycle and development Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Operator Lifecycle Manager (OLM) v1 | Technology Preview | Technology Preview | Technology Preview
RukPak | Technology Preview | Technology Preview | Technology Preview
Platform Operators | Technology Preview | Technology Preview | Removed
Scaffolding tools for Hybrid Helm-based Operator projects | Technology Preview | Technology Preview | Deprecated
Scaffolding tools for Java-based Operator projects | Technology Preview | Technology Preview | Deprecated

OpenShift CLI (oc) Technology Preview features

Table 1.27. OpenShift CLI (oc) Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
oc-mirror plugin v2 | Not Available | Not Available | Technology Preview
Enclave support | Not Available | Not Available | Technology Preview
Delete functionality | Not Available | Not Available | Technology Preview

Monitoring Technology Preview features

Table 1.28. Monitoring Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Metrics Collection Profiles | Technology Preview | Technology Preview | Technology Preview
Metrics Server | Not Available | Technology Preview | General Availability

Red Hat OpenStack Platform (RHOSP) Technology Preview features

Table 1.29. RHOSP Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Dual-stack networking with installer-provisioned infrastructure | Technology Preview | General Availability | General Availability
Dual-stack networking with user-provisioned infrastructure | Not Available | General Availability | General Availability
RHOSP integration into the Cluster CAPI Operator | Not Available | Technology Preview | Technology Preview
Control Plane with rootVolumes and etcd on local disk | Not Available | Technology Preview | Technology Preview

Hosted control planes Technology Preview features
Table 1.30. Hosted control planes Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS) | Technology Preview | Technology Preview | General Availability
Hosted control planes for OpenShift Container Platform on bare metal | General Availability | General Availability | General Availability
Hosted control planes for OpenShift Container Platform on OpenShift Virtualization | General Availability | General Availability | General Availability
Hosted control planes for OpenShift Container Platform using non-bare metal agent machines | Not Available | Technology Preview | Technology Preview
Hosted control planes for an ARM64 OpenShift Container Platform cluster on Amazon Web Services | Technology Preview | Technology Preview | Technology Preview
Hosted control planes for OpenShift Container Platform on IBM Power | Technology Preview | Technology Preview | Technology Preview
Hosted control planes for OpenShift Container Platform on IBM Z | Technology Preview | Technology Preview | Technology Preview

Machine management Technology Preview features

Table 1.31. Machine management Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Managing machines with the Cluster API for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview
Managing machines with the Cluster API for Google Cloud Platform | Technology Preview | Technology Preview | Technology Preview
Managing machines with the Cluster API for VMware vSphere | Not Available | Not Available | Technology Preview
Defining a vSphere failure domain for a control plane machine set | Not Available | Technology Preview | General Availability
Cloud controller manager for Alibaba Cloud | Technology Preview | Technology Preview | Removed
Cloud controller manager for Google Cloud Platform | Technology Preview | General Availability | General Availability
Cloud controller manager for IBM Power(R) Virtual Server | Technology Preview | Technology Preview | Technology Preview

Authentication and authorization Technology Preview features

Table 1.32. Authentication and authorization Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Pod security admission restricted enforcement | Technology Preview | Technology Preview | Technology Preview

Machine Config Operator Technology Preview features

Table 1.33. Machine Config Operator Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Improved MCO state reporting | Not Available | Technology Preview | Technology Preview
On-cluster RHCOS image layering | Not Available | Not Available | Technology Preview
Node disruption policies | Not Available | Not Available | Technology Preview
Updating boot images | Not Available | Not Available | Technology Preview

Edge computing Technology Preview features

Table 1.34. Edge computing Technology Preview tracker

Feature | 4.14 | 4.15 | 4.16
Accelerated provisioning of GitOps ZTP | Not Available | Not Available | Technology Preview

1.8. Known issues A regression in the behavior of libreswan caused some nodes with IPsec enabled to lose communication with pods on other nodes in the same cluster. To resolve this issue, consider disabling IPsec for your cluster. ( OCPBUGS-43715 ) The oc annotate command does not work for LDAP group names that contain an equal sign ( = ), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use oc patch or oc edit to add the annotation. ( BZ#1917280 )
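For example, a minimal sketch of the oc patch workaround; the group name and annotation key shown are placeholders, and the group name should be quoted if it contains special characters:

oc patch group <ldap-group-name> --type merge -p '{"metadata":{"annotations":{"example.com/team":"dev"}}}'

Because oc patch passes the annotation as a JSON merge patch instead of a key=value argument, equal signs in the group name are not misinterpreted.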
Run Once Duration Override Operator (RODOO) cannot be installed on clusters managed by the HyperShift Operator. ( OCPBUGS-17533 ) OpenShift Container Platform 4.16 installation on AWS in a secret or top secret region fails due to an issue with Network Load Balancers (NLBs) and security groups in these regions. ( OCPBUGS-33311 ) When you run Cloud-native Network Functions (CNF) latency tests on an OpenShift Container Platform cluster, the oslat test can sometimes return results greater than 20 microseconds. This results in an oslat test failure. ( RHEL-9279 ) When installing a cluster on Amazon Web Services (AWS) using Local Zones, edge nodes fail to deploy if deployed in the us-east-1-iah-2a region. ( OCPBUGS-35538 ) Installing OpenShift Container Platform 4.16 with the Infrastructure Operator, Central Infrastructure Management, or ZTP methods using ACM versions 2.10.3 or earlier is not possible. This is because of a change in the dynamically linked installer binary, openshift-baremetal-install , which in OpenShift Container Platform 4.16 requires a Red Hat Enterprise Linux (RHEL) 9 host to run successfully. It is planned to use the statically linked binary in future versions of ACM to avoid this issue. ( ACM-12405 ) When installing a cluster on AWS, the installation can time out if the load balancer DNS time-to-live (TTL) value is very high. ( OCPBUGS-35898 ) For a bonding network interface that holds a br-ex bridge device, do not set the mode=6 balance-alb bond mode in a node network configuration. This bond mode is not supported on OpenShift Container Platform and it can cause the Open vSwitch (OVS) bridge device to disconnect from your networking environment. ( OCPBUGS-34430 ) Deploying an installer-provisioned cluster on bare metal fails when a proxy is used. A service in the bootstrap virtual machine cannot access IP address 0.0.0.0 through the proxy because of a regression bug. As a workaround, add 0.0.0.0 to the noProxy list. For more information, see Setting proxy settings . ( OCPBUGS-35818 ) When installing a cluster on Amazon Web Services (AWS) in a VPC that contains multiple CIDR blocks, if the machine network is configured to use a non-default CIDR block in the install-config.yaml file, the installation fails. ( OCPBUGS-35054 ) When an OpenShift Container Platform 4.16 cluster is installed or configured as a postinstallation activity on a single VIOS host with virtual SCSI storage on IBM Power(R) with multipath configured, the CoreOS nodes with multipath enabled fail to boot. This behavior is expected as only one path is available to the node. ( OCPBUGS-32290 ) When using CPU load balancing on cgroupv2, a pod can fail to start if another pod that has access to exclusive CPUs already exists. This can happen when a pod is deleted and another one is quickly created to replace it. As a workaround, ensure that the old pod is fully terminated before attempting to create the new one. ( OCPBUGS-34812 ) Enabling LUKS encryption on a system using 512 emulation disks causes provisioning to fail and the system launches the emergency shell in the initramfs. This happens because of an alignment bug in sfdisk when growing a partition. As a workaround, you can use Ignition to perform the resizing instead. ( OCPBUGS-35410 ) OpenShift Container Platform version 4.16 disconnected installation fails on IBM Power(R) Virtual Server. ( OCPBUGS-36250 ) If you have IPsec enabled on the cluster, on the node hosting the north-south IPsec connection, restarting the ipsec.service systemd unit or restarting the ovn-ipsec-host pod causes a loss of the IPsec connection.
( RHEL-26878 ) If you set the baselineCapabilitySet field to None , you must explicitly enable the Ingress Capability, because the installation of a cluster fails if the Ingress Capability is disabled. ( OCPBUGS-33794 ) The current PTP grandmaster clock (T-GM) implementation has a single National Marine Electronics Association (NMEA) sentence generator sourced from the GNSS without a backup NMEA sentence generator. If NMEA sentences are lost before reaching the e810 NIC, the T-GM cannot synchronize the devices in the network synchronization chain and the PTP Operator reports an error. A proposed fix is to report a FREERUN event when the NMEA string is lost. Until this limitation is addressed, T-GM does not support PTP clock holdover state. ( OCPBUGS-19838 ) When a worker node's Topology Manager policy is changed, the NUMA-aware secondary pod scheduler does not respect this change, which can result in incorrect scheduling decisions and unexpected topology affinity errors. As a workaround, restart the NUMA-aware scheduler by deleting the NUMA-aware scheduler pod. ( OCPBUGS-34583 ) If you plan to deploy the NUMA Resources Operator, avoid using OpenShift Container Platform versions 4.16.25 or 4.16.26. ( OCPBUGS-45983 ) Due to an issue with Kubernetes, the CPU Manager is unable to return CPU resources from the last pod admitted to a node to the pool of available CPU resources. These resources are allocatable if a subsequent pod is admitted to the node. However, this pod then becomes the last pod, and again, the CPU manager cannot return this pod's resources to the available pool. This issue affects CPU load balancing features, which depend on the CPU Manager releasing CPUs to the available pool. Consequently, non-guaranteed pods might run with a reduced number of CPUs. As a workaround, schedule a pod with a best-effort CPU Manager policy on the affected node. This pod will be the last admitted pod and this ensures the resources will be correctly released to the available pool. ( OCPBUGS-17792 ) After applying a SriovNetworkNodePolicy resource, the CA certificate might be replaced during SR-IOV Network Operator webhook reconciliation. As a consequence, you might see unknown authority errors when applying SR-IOV Network node policies. As a workaround, try to re-apply the failed policies. ( OCPBUGS-32139 ) If you delete a SriovNetworkNodePolicy resource for a virtual function with a vfio-pci driver type, the SR-IOV Network Operator is unable to reconcile the policy. As a consequence the sriov-device-plugin pod enters a continuous restart loop. As a workaround, delete all remaining policies affecting the physical function, then re-create them. ( OCPBUGS-34934 ) If the controller pod terminates while cloning is in progress, the Microsoft Azure File clone persistent volume claims (PVCs) remain in the Pending state. To resolve this issue, delete any affected clone PVCs, and then recreate the PVCs. ( OCPBUGS-35977 ) There is no log pruning available for azcopy (underlying tool running copy jobs) in Microsoft Azure, so this might eventually lead to filling up a root device of the controller pod, and you have to manually clean this up. ( OCPBUGS-35980 ) The limited live migration method stops when the mtu parameter of a ConfigMap object in the openshift-network-operator namespace is missing. In most cases, the mtu field of the ConfigMap object is created by the mtu-prober job during installation. 
However, if the cluster was upgraded from an earlier release, for example, OpenShift Container Platform 4.4.4, the ConfigMap object might be absent. As a temporary workaround, you can manually create the ConfigMap object before starting the limited live migration process. For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mtu
  namespace: openshift-network-operator
data:
  mtu: "1500" 1

1 The mtu value must be aligned with the MTU of the node interface.

( OCPBUGS-35316 ) In hosted clusters, self-signed certificates from the API cannot be replaced. ( OCPSTRAT-1516 ) Low-latency applications that rely on high-resolution timers to wake up their threads might experience higher wake up latencies than expected. Although the expected wake up latency is under 20ms, latencies exceeding this time can occasionally be seen when running the cyclictest tool for long durations. Testing has shown that wake up latencies are under 20ms for over 99.99999% of the samples. ( OCPBUGS-34022 ) 1.9. Asynchronous errata updates Security, bug fix, and enhancement updates for OpenShift Container Platform 4.16 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.16 errata is available on the Red Hat Customer Portal . See the OpenShift Container Platform Life Cycle for more information about asynchronous errata. Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate. This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.16. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.16.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow. Important For any OpenShift Container Platform release, always review the instructions on updating your cluster properly. 1.9.1. RHSA-2025:1907 - OpenShift Container Platform 4.16.37 bug fix and security update Issued: 5 March 2025 OpenShift Container Platform release 4.16.37 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:1907 advisory. The RPM packages that are included in the update are provided by the RHSA-2025:1910 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.37 --pullspecs 1.9.1.1. Bug fixes Previously, you could not create a cluster with an enabled secure proxy and certificate set in the configuration.proxy.trustCA field. Another issue prevented you from reaching cloud APIs through the management cluster proxy. With this release, these issues are resolved. ( OCPBUGS-51296 ) Previously, there was incorrect logic for bucket name generation. With this release, the issue is resolved. ( OCPBUGS-51167 ) Previously, when you deleted a Dynamic Host Configuration Protocol (DHCP) network on an IBM Power Virtual Server cluster, subresources could still exist.
With this release, when you delete a DHCP network, the subresources deletion now occurs before continuing the destroy operation. ( OCPBUGS-51111 ) Previously, incorrect addresses were being passed to the Kubernetes EndpointSlice on a cluster, and this issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red Hat Marketplace pods can now successfully connect to the cluster API server so that the installation of MetalLB Operator and handling of ingress traffic in IPv6 disconnected environments can occur. ( OCPBUGS-50694 ) Previously, the cnf-tests image used an outdated image version for running tests. With this release, the issue is resolved. ( OCPBUGS-50611 ) Previously, an extra name prop was being passed into resource list page extensions used to list related operands on the CSV details page. This caused the operand list to be filtered by the CSV name, which would typically cause it to be an empty list. With this update, operands are listed as expected. ( OCPBUGS-46441 ) 1.9.1.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.2. RHSA-2025:1707 - OpenShift Container Platform 4.16.36 bug fix and security update Issued: 27 February 2025 OpenShift Container Platform release 4.16.36 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:1707 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:1709 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.36 --pullspecs 1.9.2.1. Bug fixes Previously, certain OpenShift Container Platform clusters with hundreds of nodes and network policies caused the live migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin to fail. The live migration operation failed because of excess RAM consumption. With this release, a fix means that the live migration operation no longer fails for these cluster configurations. ( OCPBUGS-46493 ) 1.9.2.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.3. RHSA-2025:1386 - OpenShift Container Platform 4.16.35 bug fix update Issued: 19 February 2025 OpenShift Container Platform release 4.16.35 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:1386 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:1390 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.35 --pullspecs 1.9.3.1. Bug fixes Previously, the Bare Metal Operator (BMO) created the HostFirmwareComponents custom resource for all BareMetalHosts (BMH), including the intelligent platform management interface (IPMI), which does not support the HostFirmwareComponents custom resource. With this release, HostFirmwareComponents custom resources are only created for BMH. ( OCPBUGS-49703 ) Previously, importing manifest lists could cause an API crash if the source registry returned an invalid sub-manifest result. With this update, the API flags an error on the imported tag instead of crashing. 
( OCPBUGS-49656 ) Previously, when you used the installation program to install a cluster in a Prism Central environment, the installation failed because a prism-api call that loads an RHCOS image timed out. This issue happened because the prismAPICallTimeout parameter was set to 5 minutes. With this release, the prismAPICallTimeout parameter in the install-config.yaml configuration file now defaults to 10 minutes. You can also configure the parameter if you need a longer timeout for a prism-api call. ( OCPBUGS-49416 ) Previously, when you attempted to scale a DeploymentConfig object with an admission webhook that targeted the object's deploymentconfigs/scale subresource, the apiserver failed to handle the request. This prevented the DeploymentConfig object from being scaled. With this release, a fix ensures that this issue no longer occurs. ( OCPBUGS-45010 ) 1.9.3.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.4. RHBA-2025:1124 - OpenShift Container Platform 4.16.34 bug fix update Issued: 12 February 2025 OpenShift Container Platform release 4.16.34 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2025:1124 advisory. There are no RPM packages for this release. You can view the container images in this release by running the following command: USD oc adm release info 4.16.34 --pullspecs 1.9.4.1. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.5. RHBA-2025:0828 - OpenShift Container Platform 4.16.33 bug fix and security update Issued: 06 February 2025 OpenShift Container Platform release 4.16.33 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2025:0828 advisory. The RPM packages that are included in the update are provided by the RHSA-2025:0830 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.33 --pullspecs 1.9.5.1. Bug fixes Previously, some cluster autoscaler metrics were not initialized and were unavailable. With this release, the cluster autoscaler metrics are initialized and available. ( OCPBUGS-48732 ) Previously, every time a subscription was reconciled, the OLM catalog Operator requested a full view of the catalog metadata from the catalog source pod of the subscription. These requests caused performance issues for the catalog pods. With this release, the OLM catalog Operator now uses a local cache that is refreshed periodically and reused by all subscription reconciliations, so that the performance issue for the catalog pods no longer persists. ( OCPBUGS-48696 ) Previously, if you specified a forceSelinuxRelabel field in the ClusterResourceOverride CR and then modified the CR at a later stage, the Cluster Resource Override Operator did not apply the update to the associated ConfigMap resource. This ConfigMap resource is important for an SELinux relabeling feature, forceSelinuxRelabel . With this release, the Cluster Resource Override Operator now applies and tracks any ClusterResourceOverride CR changes to the ConfigMap resource. ( OCPBUGS-48690 ) Previously, after you deleted a pod from an OpenShift Container Platform cluster, the crun container runtime failed to stop any running containers that existed in the pod.
This caused the pod to remain in a terminating state. With this release, a fix ensures that if you delete a pod, crun stops any running containers without placing the pod in a permanent "terminating" state. ( OCPBUGS-48564 ) Previously, the Cluster Version Operator (CVO) did not filter internal errors that were propagated to the ClusterVersion Failing condition message. As a result, errors that did not negatively impact the update were shown for the ClusterVersion Failing condition message. With this release, the errors that are propagated to the ClusterVersion Failing condition message are filtered. ( OCPBUGS-46408 ) 1.9.5.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.6. RHSA-2025:0650 - OpenShift Container Platform 4.16.32 bug fix and security update Issued: 29 January 2025 OpenShift Container Platform release 4.16.32 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:0650 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:0652 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.32 --pullspecs 1.9.6.1. Bug fixes Previously, the Operator Lifecycle Manager (OLM) would sometimes concurrently resolve the same namespace in a cluster. This led to subscriptions reaching a terminal state of ConstraintsNotSatisfiable , because two concurrent processes interacted with a subscription and this caused a CSV file to become unassociated. With this release, OLM no longer concurrently resolves namespaces, so that OLM correctly processes a subscription without leaving a CSV file in an unassociated state. ( OCPBUGS-48661 ) Previously, Google Cloud Platform (GCP) updated their zone API error message, and this update caused the OpenShift Container Platform machine controller to mistakenly label a machine as valid because of a generated temporary error message that was related to Google Cloud Platform (GCP). This situation prevented invalid machines from transitioning into a failed state. With this release, the machine controller now handles the error correctly by checking if an invalid zone or projectID exists in the machine configuration. The machine controller then correctly places the machine in a failed state. ( OCPBUGS-48484 ) Previously, if the RendezvousIP matched a substring in the -hop-address field of a compute node configuration, a validation error occurred. The RendezvousIP must match only a control plane host address. With this release, a substring comparison for RendezvousIP is used only against a control plane host address, so that the error no longer exists. ( OCPBUGS-48442 ) Previously, you could not use all the available machine types in a zone for a cluster installed on IBM Power(R) Virtual Server. This issue existed because all zones in a region were assumed to have the same set of machine types. With this release, you can use all the available machine types in a zone for a cluster installed on IBM Power(R) Virtual Server. ( OCPBUGS-47663 ) Previously, a PipelineRuns CR that used a resolver could not be rerun on the OpenShift Container Platform web console. If you attempted to rerun the CR, an Invalid PipelineRun configuration, unable to start Pipeline error was generated. With this release, you can now rerun a PipelineRuns CR that uses a resolver without experiencing this issue. ( OCPBUGS-46602 )
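The following is a minimal, hypothetical sketch of a PipelineRun that references its pipeline through the Tekton git resolver; the repository URL, revision, and path are placeholders:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: resolver-example-
spec:
  pipelineRef:
    resolver: git
    params:
    - name: url
      value: https://github.com/example/pipelines.git   # placeholder repository
    - name: revision
      value: main
    - name: pathInRepo
      value: pipeline.yaml

A PipelineRun of this form can now be rerun from the web console without the error described above.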
Previously, when you used the Form View to edit Deployment or DeploymentConfig API objects on the OpenShift Container Platform web console, duplicate ImagePullSecrets parameters existed in the YAML configuration for either object. With this release, a fix ensures that duplicate ImagePullSecrets parameters do not get automatically added for either object. ( OCPBUGS-45948 ) Previously, when the installation program installed a cluster on Microsoft Azure, the installation program enabled cross-tenant objects and replicated them. These replicated objects do not comply with Payment Card Industry Data Security Standard (PCI DSS) and the Federal Financial Supervisory Authority ( BaFin ) regulations. With this release, the installation program disables the objects, so that the cluster strictly adheres to the previously mentioned data governance regulations. ( OCPBUGS-45999 ) Previously, Red Hat Enterprise Linux (RHEL) CoreOS templates that were shipped by the Machine Config Operator (MCO) caused node scaling to fail on Red Hat OpenStack Platform (RHOSP). This issue happened because of an issue with systemd and the presence of a legacy boot image from older versions of OpenShift Container Platform. With this release, a patch fixes the issue with systemd and removes the legacy boot image, so that node scaling can continue as expected. ( OCPBUGS-43765 ) Previously, on the OpenShift Container Platform web console, the VMware vSphere configuration dialog box stalled because of network or validation errors. With this release, a fix ensures that you can close the dialog, cancel any configuration changes, or edit a configuration without the errors causing the dialog to stall. ( OCPBUGS-29823 ) Previously, on the OpenShift Container Platform web console, the VMware vSphere configuration dialog box did not validate values entered into any fields because of an issue with the vSphere plugin. After you saved the configuration, the outputted data was not logically formatted. With this release, the vSphere plugin now performs validation checks on inputted data so the plugin outputs the data in a logical format. ( OCPBUGS-29616 ) 1.9.6.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.7. RHSA-2025:0140 - OpenShift Container Platform 4.16.30 bug fix and security update Issued: 15 January 2025 OpenShift Container Platform release 4.16.30 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:0140 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:0143 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.30 --pullspecs 1.9.7.1. Bug fixes Previously, the Operator Lifecycle Manager (OLM) catalog registry pods were terminated by the kubelet with a TerminationByKubelet error, and the pods were not recreated by the catalog Operator. With this release, the registry pods are recreated without an error. ( OCPBUGS-47738 ) Previously, the certificate signing request (CSR) approver included certificates from other systems in its calculations to determine if it was congested. When this happened, the CSR approver stopped approving certificates. With this release, the CSR approver only includes CSRs that it can approve, using the signerName property as a filter. The CSR approver only prevents new approvals when there are a large number of CSRs, for the signerName values that it observes, and for certificates that it does not approve. ( OCPBUGS-47704 )
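As an illustration, and not part of the fix itself, you can list the CSRs that belong to a given signer with a field selector; the signer shown is the standard kubelet client signer:

oc get csr --field-selector spec.signerName=kubernetes.io/kube-apiserver-client-kubelet

Substitute another signerName value to inspect the CSRs that other signers handle.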
Previously, the Performance Profile Creator (PPC) failed to build a performance profile for compute nodes that had different core ID numbering (core per socket) for their logical processors and the nodes existed under the same node pool. With this release, the PPC does not fail to create the performance profile because the PPC can build a performance profile for a cluster that has compute nodes with different core ID numbering for their logical processors. The PPC produces a warning message to use the generated performance profile with caution, because different core ID numbering might impact the system optimization and isolated management of tasks. ( OCPBUGS-47701 ) Previously, an event was missed by the informer watch stream. If an object was deleted while this disconnection occurred, the informer returned a different type, reported that the state was invalid, and the object was deleted. With this release, such temporary disconnections are handled correctly. ( OCPBUGS-47645 ) Previously, when using the SiteConfig custom resource (CR) to delete a cluster or a node, the BareMetalHost CR was stuck in the Deprovisioning state. With this release, the deletion order is correct and the SiteConfig CR deletes a cluster or a node successfully. This fix requires Red Hat OpenShift GitOps version 1.13 or later. ( OCPBUGS-46524 ) 1.9.7.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.8. RHBA-2025:0018 - OpenShift Container Platform 4.16.29 bug fix and security update Issued: 09 January 2025 OpenShift Container Platform release 4.16.29 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2025:0018 advisory. The RPM packages that are included in the update are provided by the RHBA-2025:0021 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.29 --pullspecs 1.9.8.1. Enhancements 1.9.8.1.1. Node Tuning Operator architecture detection The Node Tuning Operator can now properly select kernel arguments and management options for Intel and AMD CPUs. ( OCPBUGS-46496 ) 1.9.8.2. Bug fixes Previously, if you deleted the default sriovOperatorConfig custom resource (CR), you could not recreate the default sriovOperatorConfig CR, because the ValidatingWebhookConfiguration was not initially deleted. With this release, the Single Root I/O Virtualization (SR-IOV) Network Operator removes validating webhooks when you delete the sriovOperatorConfig CR, so that you can create a new sriovOperatorConfig CR. ( OCPBUGS-44727 ) Previously, users could enter an invalid string for any CPU set in the performance profile, resulting in a broken cluster. With this release, the fix ensures that only valid strings can be entered, eliminating the risk of cluster breakage. ( OCPBUGS-47678 ) Previously, installation of an AWS cluster failed in certain environments on existing subnets when the MachineSet object's parameter publicIp was explicitly set to false .
With this release, a fix ensures that a configuration value set for publicIp no longer causes issues when the installation program provisions machines for your AWS cluster in certain environments. ( OCPBUGS-46508 ) Previously, the IDs that were used to determine the number of rows in a Dashboard table were not unique, and some rows were combined if their IDs were the same. With this release, the ID uses more information to prevent duplicate IDs and the table can display each expected row. ( OCPBUGS-45334 ) 1.9.8.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.9. RHBA-2024:11502 - OpenShift Container Platform 4.16.28 bug fix and security update Issued: 02 January 2025 OpenShift Container Platform release 4.16.28 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:11502 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:11505 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.28 --pullspecs 1.9.9.1. Known issues A regression in the behavior of libreswan caused some IPsec-enabled nodes to lose communication with pods on other nodes in the same cluster. To resolve this issue, disable IPsec for your cluster. ( OCPBUGS-43715 ) 1.9.9.2. Bug fixes Previously, when the webhook token authenticator was enabled and had the authorization type set to None , the OpenShift Container Platform web console would consistently crash. With this release, a bug fix ensures that this configuration does not cause the OpenShift Container Platform web console to crash. ( OCPBUGS-46481 ) Previously, when you attempted to use Operator Lifecycle Manager (OLM) to upgrade an Operator, the upgrade was blocked and an error validating existing CRs against new CRD's schema message was generated. An issue existed with OLM, where OLM erroneously identified incompatibility issues validating existing custom resources (CRs) against the new Operator version custom resource definitions (CRDs). With this release, the validation is corrected so that Operator upgrades are no longer blocked. ( OCPBUGS-46434 ) Previously, when a long string of individual CPUs were in the Performance Profile, the machine configurations were not processed. With this release, the user input process is updated to use a sequence of numbers or a range of numbers on the kernel command line. ( OCPBUGS-46074 ) Previously, when users wanted to configure their Amazon Web Services (AWS) DHCP option set with a custom domain name that contained a trailing period and the hostname of EC2 instances were converted to Kubelet node names, the trailing period was not removed. Trailing periods are not allowed in a Kubernetes object name. With this release, trailing periods are allowed in a domain name in a DHCP option set. ( OCPBUGS-45974 ) Previously, the kdump initramfs stopped responding when opening a local encrypted disk, even when the kdump destination was a remote machine that did not need to access the local machine. With this release, this issue is fixed and the kdump initramfs successfully opens a local encrypted disk. 
( OCPBUGS-45837 ) Previously, the aws-sdk-go-v2 software development kit (SDK) failed to authenticate an AssumeRoleWithWebIdentity API operation on an Amazon Web Services (AWS) Security Token Service (STS) cluster. With this release, pod-identity-webhook now includes a default region so that this issue no longer persists. ( OCPBUGS-45939 ) Previously, an ingress rule was created for a security group in an AWS cluster that allowed a 0.0.0.0/0 Classless Inter-Domain Routing (CIDR) address access to a node port in the 30000-32767 range. With this release, the rule is removed during AWS cluster installation. ( OCPBUGS-45669 ) Previously, the build controller looked for secrets that were linked for general use, not specifically for the image pull. With this release, when searching for default image pull secrets, the builds use ImagePullSecrets that are linked to the service account. ( OCPBUGS-31213 ) Previously, the Maximum Transmission Unit (MTU) migration phase of the SDN-OVN live migration could run many times if one machine config pool (MCP) was paused. This prevented the live migration from ending successfully. After this release, this should no longer happen. ( OCPBUGS-44338 ) Previously, after upgrading from OpenShift Container Platform 4.12 to 4.14, the customer reported that the pods could not reach their service when a NetworkAttachmentDefinition was set. With this release, the pods can reach their service after the upgrade. ( OCPBUGS-44457 ) 1.9.9.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.10. RHBA-2024:10973 - OpenShift Container Platform 4.16.27 bug fix and security update Issued: 19 December 2024 OpenShift Container Platform release 4.16.27 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:10973 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:10976 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.27 --pullspecs 1.9.10.1. Bug fixes Previously, when openshift-sdn pods were deployed during the OpenShift Container Platform upgrading process, the Open vSwitch (OVS) storage table was cleared. This issue occurred on OpenShift Container Platform 4.16.19 and later versions. Ports for existing pods had to be re-created and this caused disruption to numerous services. With this release, a fix ensures that the OVS tables do not get cleared and pods do not get disconnected during a cluster upgrade operation. ( OCPBUGS-45806 ) Previously, you could not remove a finally pipeline task from the edit Pipeline form if you created a pipeline with only one finally task. With this release, you can remove the finally task from the edit Pipeline form and the issue is resolved. ( OCPBUGS-45229 ) Previously, the installation program did not validate the maximum transmission unit (MTU) for a custom IPv6 network on Red Hat Enterprise Linux CoreOS (RHCOS). If you specified a low value for the MTU, installation of the cluster would fail. With this release, the minimum MTU value for IPv6 networks is set to 1380 octets, where 1280 octets is the minimum MTU for the IPv6 protocol and the remaining 100 octets is for the OVN-Kubernetes encapsulation overhead. 
With this release, the installation program now validates the MTU for a custom IPv6 network on Red Hat Enterprise Linux CoreOS (RHCOS). ( OCPBUGS-41813 ) Previously, the Display Admission Webhook warning implementation presented issues with some incorrect code. With this update, the unnecessary warning message has been removed. ( OCPBUGS-43750 ) Previously, deploying the NUMA Resources Operator on a cluster was not possible for OpenShift Container Platform versions 4.16.25, 4.16.26, or potentially subsequent z-stream versions of 4.16. With this release, deployment of the NUMA Resources Operator is now supported starting from OpenShift Container Platform 4.16.27 and later versions of OpenShift Container Platform 4.16. The issue remains unresolved for 4.16.25 and 4.16.26. ( OCPBUGS-45983 ) 1.9.10.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.11. RHSA-2024:10823 - OpenShift Container Platform 4.16.26 bug fix and security update Issued: 12 December 2024 OpenShift Container Platform release 4.16.26 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:10823 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:10826 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.26 --pullspecs 1.9.11.1. Enhancements Previously, ClusterTasks were listed on the Pipelines builder page and the ClusterTask list page in the Tasks navigation menu. ClusterTasks are deprecated as of Pipelines 1.17, and the ClusterTask dependency is removed from the static plug-in. The Pipelines builder page now displays only the tasks that are present in the namespace and community tasks. ( OCPBUGS-45015 ) 1.9.11.2. Bug fixes Previously, when you used the Agent-based Installer to install a cluster on a node that had an incorrect date, the cluster installation failed. With this release, a patch is applied to the Agent-based Installer live ISO time synchronization. The patch configures the /etc/chrony.conf file with the list of additional Network Time Protocol (NTP) servers, so that you can set any of these additional NTP servers in the agent-config.yaml without experiencing a cluster installation issue. ( OCPBUGS-45181 )
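As an illustrative sketch, additional NTP servers can be listed in the agent-config.yaml file through the additionalNTPSources field; the server names are placeholders and the other required fields of the file are omitted:

# excerpt from agent-config.yaml; other required fields omitted
additionalNTPSources:
- 0.example-ntp.example.com
- 1.example-ntp.example.com

These servers are then written into /etc/chrony.conf on the live ISO, as described above.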
Previously, when you used a custom template, you could not enter multi-line parameters, such as private keys. With this release, you can switch between single-line and multi-line modes and you can complete the template fields with multi-line input. ( OCPBUGS-45124 ) Previously, up to OpenShift Container Platform 4.15, you had the option to close the Getting started with resources section. After OpenShift Container Platform 4.15, the Getting started with resources section was converted to an expandable section, and you did not have a way to close the section. With this release, you can close the Getting started with resources section. ( OCPBUGS-45181 ) Previously, in Red Hat OpenShift Container Platform, when you selected the start lastrun option on the Edit BuildConfig page, an error prevented the lastrun operation from running. With this release, a fix ensures that the start lastrun option successfully completes. ( OCPBUGS-44875 ) 1.9.11.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.12. RHSA-2024:10528 - OpenShift Container Platform 4.16.25 bug fix and security update Issued: 4 December 2024 OpenShift Container Platform release 4.16.25 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:10528 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:10531 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.25 --pullspecs 1.9.12.1. Bug fixes Previously, in the web console Notifications section, silenced alerts were visible on the notification drawer because the alerts did not include external labels. With this release, the alerts include external labels and the silenced alerts are not visible on the notification drawer. ( OCPBUGS-44885 ) Previously, the installation program failed to parse the cloudControllerManager field correctly, passing it as an empty string to the Assisted Service API. This error caused the Assisted Service to fail, blocking successful cluster installations on Oracle(R) Cloud Infrastructure (OCI). With this release, the parsing logic is updated to correctly interpret the cloudControllerManager field from the install-config.yaml file, ensuring that the expected value is properly sent to the Assisted Service API. ( OCPBUGS-44348 ) Previously, the Display Admission Webhook warning implementation resulted in problems with invalid code. With this release, the unnecessary warning message is removed, preventing problems with invalid code. ( OCPBUGS-44207 ) Previously, when importing a Git repository using the serverless import strategy, the environment variables from the func.yaml file were not automatically loaded into the form. With this release, the environment variables are loaded during the Git repository import process. ( OCPBUGS-43447 ) 1.9.12.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.13. RHSA-2024:10147 - OpenShift Container Platform 4.16.24 bug fix and security update Issued: 26 November 2024 OpenShift Container Platform release 4.16.24 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:10147 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:10150 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.24 --pullspecs 1.9.13.1. Bug fixes Previously, if Operator Lifecycle Manager (OLM) was unable to access the secret associated with a service account, OLM would rely on the Kubernetes API server to automatically create a bearer token. With Kubernetes versions 1.22 and later, this action is no longer automatic, so with this release OLM uses the TokenRequest API to request a new Kubernetes API token. ( OCPBUGS-44351 ) Previously, the approval mechanism for certificate signing requests (CSRs) failed because the node name and internal DNS entry for a CSR did not match in terms of character case differences.
With this release, an update to the approval mechanism for CSRs skips case-sensitive checks so that a CSR with a matching node name and internal DNS entry does not fail the check because of character case differences. ( OCPBUGS-44629 ) Previously, HyperShift-based ROKS clusters were unable to authenticate through the oc login command. The web browser displayed an error when it attempted to retrieve the token after selecting Display Token . With this release, cloud.ibm.com and other cloud-based endpoints are no longer proxied and authentication is successful. ( OCPBUGS-44277 ) OpenShift Container Platform 4.16 now supports Operator SDK 1.36.1. See Installing the Operator SDK CLI to install or update to this latest version. Note Operator SDK 1.36.1 now supports Kubernetes 1.29 and uses a Red Hat Enterprise Linux (RHEL) 9 base image. If you have Operator projects that were previously created or maintained with Operator SDK 1.31.0, update your projects to keep compatibility with Operator SDK 1.36.1. Updating Go-based Operator projects Updating Ansible-based Operator projects Updating Helm-based Operator projects Updating Hybrid Helm-based Operator projects Updating Java-based Operator projects ( OCPBUGS-44485 , OCPBUGS-44486 ) 1.9.13.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.14. RHSA-2024:9615 - OpenShift Container Platform 4.16.23 bug fix and security update Issued: 20 November 2024 OpenShift Container Platform release 4.16.23 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:9615 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:9618 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.23 --pullspecs 1.9.14.1. Bug fixes Previously, when the Cluster Resource Override Operator failed to run its operand controller, the Operator attempted to re-run the controller. Each re-run operation generated a new set of secrets that eventually constrained cluster namespace resources. With this release, the service account for a cluster now includes annotations that prevent the Operator from creating additional secrets when a secret already exists for the cluster. ( OCPBUGS-44351 ) Previously, when the VMware vSphere vCenter cluster contained an ESXi host that did not have a standard port group defined, and the installation program tried to select that host to import the OVA, the import failed and the error Invalid Configuration for device 0 was reported. With this release, the installation program verifies whether a standard port group for an ESXi host is defined and, if not, continues until it locates an ESXi host with a defined standard port group, or reports an error message if it fails to locate one, resolving the issue. ( OCPBUGS-38930 ) Previously, when installing a cluster on IBM Cloud(R) into an existing VPC, the installation program retrieved an unsupported VPC region. Attempting to install into a supported VPC region that follows the unsupported VPC region alphabetically caused the installation program to crash. With this release, the installation program is updated to ignore any VPC regions that are not fully available during resource lookups.
( OCPBUGS-36290 ) Previously, when you used the limited live migration method, and a namespace in your cluster included a network policy that allowed communication with the host network, a communication issues existed for nodes in the cluster. More specifically, host network pods on nodes managed by different Container Network Interfaces could not communicate with pods in the namespace. With this release, a fix ensures that you can now use the live migration on a namespace that includes a network policy that allows communication with the host network without experiencing the communication issue. ( OCPBUGS-43344 ) 1.9.14.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.15. RHBA-2024:8986 - OpenShift Container Platform 4.16.21 bug fix Issued: 13 November 2024 OpenShift Container Platform release 4.16.21 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:8986 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:8989 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.21 --pullspecs 1.9.15.1. Bug fixes Previously, the installation program populated the network.devices , template , and workspace fields in the spec.template.spec.providerSpec.value section of the VMware vSphere control plane machine set custom resource (CR). These fields should be set in the vSphere failure domain, and the installation program that populated them caused unintended behaviors. Updating these fields did not trigger an update to the control plane machines, and these fields were cleared when the control plane machine set was deleted. With this release, the installation program is updated to no longer populate values that are included in the failure domain configuration. If these values are not defined in a failure domain configuration, for instance on a cluster that is updated to OpenShift Container Platform 4.16 from an earlier version, the values defined by the installation program are used. ( OCPBUGS-44179 ) Previously, enabling ESP hardware offload using IPSec on attached interfaces in Open vSwitch broke connectivity due to a bug in Open vSwitch. With this release, OpenShift automatically disables ESP hardware offload on the Open vSwitch attached interfaces, and the issue is resolved. ( OCPBUGS-44043 ) Previously, restarting a CVO pod while it was initializing the synchronization work broke the guard of a blocked upgrade request. As a result, the blocked request was incorrectly accepted. With this release, the CVO postpones the reconciliation during the initialization step, and the issue is resolved. ( OCPBUGS-43964 ) Previously, if you ran RHCOS in the live environment where the rpm-ostree-fix-shadow-mode.service used to run, the rpm-ostree-fix-shadow-mode.service logged a failure that did not impact the deployment or live system. With this release, the rpm-ostree-fix-shadow-mode.service does not run when RHCOS is not running from an installed environment and the issue is resolved. ( OCPBUGS-36806 ) Previously, the installation program retrieved an unsupported VPC region when you installed a cluster on IBM Cloud(R) into an existing VPC. 
Attempting to install into a supported VPC region that follows the unsupported VPC region alphabetically caused the installation program to crash. With this release, the installation program is updated to ignore any VPC regions that are not fully available during resource lookups. ( OCPBUGS-36290 ) 1.9.15.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.16. RHSA-2024:8683 - OpenShift Container Platform 4.16.20 bug fix and security update Issued: 06 November 2024 OpenShift Container Platform release 4.16.20 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:8683 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:8686 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.20 --pullspecs 1.9.16.1. Bug fixes Previously, an invalid or unreachable identity provider (IDP) blocked updates to hosted control planes. With this release, the ValidIDPConfiguration condition in the HostedCluster object now reports any IDP errors so that these errors do not block updates to hosted control planes. ( OCPBUGS-43840 ) Previously, the Machine Config Operator (MCO) vSphere resolve-prepender script used systemd directives that were incompatible with old bootimage versions used in OpenShift Container Platform 4. With this release, nodes can scale using newer boot image versions 4.16 4.13 or later, through manual intervention, or by updating to a release that includes this fix. ( OCPBUGS-42109 ) 1.9.16.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.17. RHSA-2024:8415 - OpenShift Container Platform 4.16.19 bug fix and security update Issued: 30 October 2024 OpenShift Container Platform release 4.16.19 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:8415 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:8418 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.19 --pullspecs 1.9.17.1. Bug fixes Previously, when the Image Registry Operator was configured with NetworkAccess: Internal in Microsoft Azure, you could not successfully set managementState to Removed in the Operator configuration. This was due to an authorization error when the Operator tried to delete the storage container. With this release, the Operator continues to delete the storage account, which automatically deletes the storage container. This results in a successful change to the Removed state. ( OCPBUGS-43555 ) Previously, in managed services, audit logs were sent to a local webhook service. Control plane deployments sent traffic through konnectivity and attempted to send the audit webhook traffic through the konnectivity proxies: openshift-apiserver and oauth-openshift . With this release, the audit-webhook is in the list of no_proxy hosts for the affected pods, and the audit log traffic that is sent to the audit-webhook is successfully sent. 
( OCPBUGS-43046 ) Previously, when you used the Agent-based Installer to install a cluster, the assisted-installer-controller timed out of the installation process, depending on whether assisted-service was unavailable on the rendezvous host. This event caused the cluster installation to fail during CSR approval checks. With this release, an update to assisted-installer-controller ensures that the controller does not time out if the assisted-service is unavailable. The CSR approval check now works as expected. ( OCPBUGS-42710 ) Previously, the IBM(R) cloud controller manager (CCM) was reconfigured to use loopback as the bind address in OpenShift Container Platform 4.16. The liveness probe was not configured to use loopback, so the CCM constantly failed the liveness probe and continuously restarted. With this release, the IBM(R) CCM liveness probe is configured to use the loopback for the request host. ( OCPBUGS-42125 ) Previously, the Messaging Application Programming Interface (MAPI) for IBM Cloud currently only checks the first group of subnets (50) when searching for subnet details by name. With this release, the search provides pagination support to search all subnets. ( OCPBUGS-36698 ) 1.9.17.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.18. RHSA-2024:8260 - OpenShift Container Platform 4.16.18 bug fix and security update Issued: 24 October 2024 OpenShift Container Platform release 4.16.18 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:8260 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:8263 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.18 --pullspecs 1.9.18.1. Enhancements The SR-IOV Network Operator supports Intel NetSec Accelerator Cards and Marvell Octeon 10 DPUs. ( OCPBUGS-43452 ) 1.9.18.2. Bug fixes Previously, the Single-Root I/O Virtualization (SR-IOV) Operator did not expire the acquired lease during the Operator's shutdown operation. This impacted a new instance of the Operator, because the new instance had to wait for the lease to expire before the new instance was operational. With this release, an update to the Operator shutdown logic ensures that the Operator expires the lease when the Operator is shutting down. ( OCPBUGS-37669 ) Previously, an interface created inside a new pod would remain inactive and the Gratuitous Address Resolution Protocol (GARP) notification would be generated. The notification did not reach the cluster and this prevented ARP tables of other pods inside the cluster from updating the MAC address of the new pod. This situation caused cluster traffic to stall until ARP table entries expired. With this release, a GARP notification is now sent after the interface inside a pod is active so that the GARP notification reaches the cluster. As a result, surrounding pods can identify the new pod earlier than they could with the behavior. ( OCPBUGS-36735 ) Previously, a machine controller failed to save the VMware vSphere task ID of an instance template clone operation. This caused the machine to go into the Provisioning state and to power off. With this release, the VMware vSphere machine controller can detect and recover from this state. 
( OCPBUGS-43433 ) Previously, when you attempted to use the oc import-image command to import an image in a hosted control planes cluster, the command failed because of access issues with a private image registry. With this release, an update to openshift-apiserver pods in a hosted control planes cluster resolves names that use the data plane so that the oc import-image command now works as expected with private image registries. ( OCPBUGS-43308 ) Previously, when you used the must-gather tool, a Multus Container Network Interface (CNI) log file, multus.log , was stored in a node's file system. This situation caused the tool to generate unnecessary debug pods in a node. With this release, the Multus CNI no longer creates a multus.log file, and instead uses a CNI plugin pattern to inspect any logs for Multus DaemonSet pods in the openshift-multus namespace. ( OCPBUGS-33959 ) Previously, when you configured the image registry to use an Microsoft Azure storage account that was located in a resource group other than the cluster's resource group, the Image Registry Operator would become degraded. This occurred because of a validation error. With this release, an update to the Operator allows for authentication only by using a storage account key. Validation of other authentication requirements is not required. ( OCPBUGS-42933 ) Previously, during root certification rotation, the metrics-server pod in the data plane failed to start correctly. This happened because of a certificate issue. With this release, the hostedClusterConfigOperator resource sends the correct certificate to the data plane so that the metrics-server pod starts as expected. ( OCPBUGS-42432 ) 1.9.18.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.19. RHSA-2024:7944 - OpenShift Container Platform 4.16.17 bug fix and security update Issued: 16 October 2024 OpenShift Container Platform release 4.16.17 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:7944 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:7947 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.17 --pullspecs 1.9.19.1. Bug fixes Previously, the Ingress and DNS Operators failed to start correctly because of rotating root certificates. With this release, the Ingress and DNS Operator kubeconfigs are conditionally managed by using the annotation that defines when the PKI requires management, and the issue is resolved. ( OCPBUGS-42431 ) Previously on Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP), a cluster that used mirroring release images might result in existing node pools to use the hosted cluster's operating system version instead of the NodePool version. With this release, a fix ensures that node pools use their own versions. ( OCPBUGS-42342 ) Previously, a coding issue caused the Ansible script on RHCOS user-provisioned installation infrastructure to fail. This occurred when IPv6 was enabled for a three-node cluster. With this release, support exists for installing a three-node cluster with IPv6 enabled on RHCOS. 
( OCPBUGS-41334 ) Previously, bonds that were configured in active-backup mode would have IPsec Encapsulating Security Payload (ESP) offload active even if underlying links did not support ESP offload. This caused IPsec associations to fail. With this release, ESP offload is disabled for bonds so that IPsec associations pass. ( OCPBUGS-41256 ) Previously, Ironic inspection failed if special or invalid characters existed in the serial number of a block device. This occurred because the lsblk command failed to escape the characters. With this release, the command escapes the characters so this issue no longer persists. ( OCPBUGS-39017 ) Previously, the manila-csi-driver and node registrar pods had missing health checks because of a configuration issue. With this release, the health checks are added to each resource. ( OCPBUGS-38458 ) 1.9.19.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.20. RHSA-2024:7599 - OpenShift Container Platform 4.16.16 bug fix and security update Issued: 09 October 2024 OpenShift Container Platform release 4.16.16 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:7599 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:7602 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.16 --pullspecs 1.9.20.1. Bug fixes Previously, the metal3-ironic-inspector container in the openshift-machine-api namespace caused memory consumption issues for clusters. With this release, the memory consumption issue is fixed. ( OCPBUGS-42113 ) Previously, creating cron jobs to create pods for your cluster caused the component that fetches the pods to fail. Because of this issue, the Topology page on the OpenShift Container Platform web console failed. With this release, a 3 second delay is configured for the component that fetches pods that are generated from the cron job so that this issue no longer exists. ( OCPBUGS-42015 ) Previously, because of an internal bug, the Node Tuning Operator incorrectly computed CPU masks for interrupt and network handling CPU affinity if a machine had more than 256 CPUs. This prevented proper CPU isolation on those machines and resulted in systemd unit failures. With this release, the Node Tuning Operator computes the masks correctly. ( OCPBUGS-39377 ) Previously, when you used the Redfish Virtual Media to add an xFusion bare-metal node to your cluster, the node did not get added because of a node registration issue. The issue occurred because the hardware was not 100 percent compliant with Redfish. With this release, you can now add xFusion bare-metal nodes to your cluster.( OCPBUGS-38797 ) Previously, when you added IPv6 classless inter-domain routing (CIDR) addresses to the no_proxy variable, the Ironic API ignored the addresses. With this release, the Ironic API honors any IPv6 CIDR address added to the no_proxy variable. ( OCPBUGS-37654 ) Previously, dynamic plugins using PatternFly 4 were referencing variables that are not available in OpenShift Container Platform 4.15 and later. This was causing contrast issues for ACM in dark mode. With this update, older chart styles are now available to support PatternFly 4 charts used by dynamic plugins. ( OCPBUGS-36816 ) 1.9.20.2. 
Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.21. RHSA-2024:7174 - OpenShift Container Platform 4.16.15 bug fix and security update Issued: 2 October 2024 OpenShift Container Platform release 4.16.15 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:7174 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:7177 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.15 --pullspecs 1.9.21.1. Bug fixes Previously, a local patch to kube-proxy caused OpenShift SDN to add a duplicate copy of a particular rule to the iptables ruleset each time it resynchronized. The synchronization would slow down and eventually trigger the NodeProxyApplySlow alert. With this release, the kube-proxy patch has been fixed and the alert no longer appears. ( OCPBUGS-42159 ) Previously, when the Node Tuning Operator (NTO) was configured by using PerformanceProfiles it would create an ocp-tuned-one-shot systemd service. The systemd service would run prior to kubelet and blocked execution. The systemd service invokes Podman which uses an NTO image. But, when the NTO image was not present, Podman still tried to fetch the image and it would fail. With this release, support is added for cluster-wide proxy environment variables defined in /etc/mco/proxy.env . Now, Podman pulls NTO images in environments which need to use proxies for out-of-cluster connections. ( OCPBUGS-42061 ) Previously, a change in the ordering of the TextInput parameters for PatternFly v4 and v5 caused the until field to be improperly filled and was not editable. With this release, the until field is editable so you can input the correct information. ( OCPBUGS-41996 ) Previously, when templates were defined for each failure domain, the installation program required an external connection to download the OVA in vSphere. With this release, the issue is resolved. ( OCPBUGS-41885 ) Previously, when installing a cluster on bare metal using installer provisioned infrastructure, the installation could time out if the network to the bootstrap virtual machine is slow. With this update, the timeout duration has been increased to cover a wider range of network performance scenarios. ( OCPBUGS-41845 ) Previously, when a hosted cluster proxy was configured and it used an identity provider (IDP) that had an http or https endpoint, the host name of the IDP was unresolved before sending it through the proxy. Consequently, host names that could only be resolved by the data plane failed to resolve for IDPs. With this update, a DNS lookup is performed before sending IPD traffic through the konnectivity tunnel. As a result, IDPs with host names that can only be resolved by the data plane can be verified by the Control Plane Operator. ( OCPBUGS-41372 ) Previously, due to an internal bug, if a machine had more than 256 CPUs, the Node Tuning Operator (NTO) incorrectly computed CPU masks for interrupt and network handling CPU affinity. This prevented proper CPU isolation on those machines and resulted in systemd unit failures. 
With this release, the NTO computes the masks correctly.( OCPBUGS-39377 ) Previously, when users provided public subnets while using existing subnets and creating a private cluster, the installation program occasionally exposed on the public internet the load balancers that were created in public subnets. This invalidated the reason for a private cluster. With this release, the issue is resolved by displaying a warning during a private installation that providing public subnets might break the private clusters and, to prevent this, users must fix their inputs. ( OCPBUGS-38964 ) 1.9.21.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.22. RHSA-2024:6824 - OpenShift Container Platform 4.16.14 bug fix and security update Issued: 24 September 2024 OpenShift Container Platform release 4.16.14 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6824 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:6827 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.14 --pullspecs 1.9.22.1. Enhancements The following enhancement is included in this z-stream release: 1.9.22.1.1. Collecting data from the Red Hat OpenStack Platform (RHOSP) on OpenStack Services cluster resources with the Insight Operator Insight Operator can collect data from the following Red Hat OpenStack on OpenShift Services (RHOSO) cluster resources: OpenStackControlPlane , OpenStackDataPlaneNodeSet , OpenStackDataPlaneDeployment , and OpenStackVersions . ( OCPBUGS-38021 ) 1.9.22.2. Bug fixes Previously, when Operator Lifecycle Manager (OLM) evaluated a potential upgrade, it used the dynamic client list for all custom resource (CR) instances in the cluster. For clusters with a large number of CRs, that could result in timeouts from the API server and stranded upgrades. With this release, the issue is resolved. ( OCPBUGS-41677 ) Previously, if the Hosted Cluster (HC) controllerAvailabilityPolicy value was SingleReplica , networking components with podAntiAffinity would block the rollout. With this release, the issue is resolved. ( OCPBUGS-41555 ) Previously, when deploying a cluster into an Amazon Virtual Private Cloud (VPC) with multiple CIDR blocks, the installation program failed. With this release, network settings are updated to support VPCs with multiple CIDR blocks. ( OCPBUGS-39496 ) Previously, the order of an Ansible playbook was modified to run before the metadata.json file was created, which caused issues with older versions of Ansible. With this release, the playbook is more tolerant of missing files and the issue is resolved. ( OCPBUGS-39287 ) Previously, during the same scrape, Prometheus would drop samples from the same series and only consider one of them, even when they had different timestamps. When this issue occurred continuously, it triggered the PrometheusDuplicateTimestamps alert. With this release, all samples are now ingested if they meet the other conditions. ( OCPBUGS-39179 ) Previously, when a folder was undefined and the datacenter was located in a datacenter folder, an incorrect folder structure was created starting from the root of the vCenter server. By using the Govmomi DatacenterFolders.VmFolder , it used the an incorrect path. 
With this release, the folder structure uses the datacenter inventory path and joins it with the virtual machine (VM) and cluster ID value, and the issue is resolved. ( OCPBUGS-39082 ) Previously, the installation program failed to install an OpenShift Container Platform cluster in the eu-es (Madrid, Spain) region on a IBM Power Virtual Server platform that was configured as an e980 system type. With this release, the installation program no longer fails to install a cluster in this environment. ( OCPBUGS-38502 ) Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available. As a consequence, proxying was not done correctly for the OAUTH server pod. It did not distinguish between protocols that require proxying (HTTP/S) and protocols that do not (LDAP). In addition, it did not honor the no_proxy variable that is configured in the HostedCluster.spec.configuration.proxy spec. With this release, you can configure the proxy on the Konnectivity sidecar of the OAUTH server so that traffic is routed appropriately, honoring your no_proxy settings. As a result, the OAUTH server can communicate properly with identity providers when a proxy is configured for the hosted cluster. ( OCPBUGS-38058 ) Previously, if you created a hosted cluster by using a proxy for the purposes of making the cluster reach a control plane from a compute node, the compute node would be unavailable to the cluster. With this release, the proxy settings are updated for the node so that the node can use a proxy to successfully communicate with the control plane. ( OCPBUGS-37937 ) Previously introduced IPv6 support with UPI type installation caused an issue with naming OpenStack resources, which manifests itself on creating two UPI installations on the same OpenStack cloud. The outcome of this will set network, subnets, and routers to have the same name, which will interfere with one setup and prevent deployment of the other. Now, all the names for mentioned resources will be unique per OpenShift deployment. ( OCPBUGS-36855 ) Previously, some safe sysctls were erroneously omitted from the allow list. With this release, the sysctls are added back to the allow list and the issue is resolved. ( OCPBUGS-29403 ) Previously, when an OpenShift Container Platform cluster was upgraded from version 4.14 to 4.15, the vCenter cluster field was not populated in the configuration form of the UI. The infrastructure cluster resource did not have information for upgraded clusters. With this release, the UI uses the cloud-provider-config config map for the vCenter cluster value and the issue is resolved. ( OCPBUGS-41619 ) 1.9.22.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.23. RHSA-2024:6687 - OpenShift Container Platform 4.16.13 bug fix update Issued: 19 September 2024 OpenShift Container Platform release 4.16.13 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6687 advisory. There are no RPM packages for this update. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.13 --pullspecs 1.9.23.1. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.24. 
RHSA-2024:6632 - OpenShift Container Platform 4.16.12 bug fix and security update Issued: 17 September 2024 OpenShift Container Platform release 4.16.12 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6632 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:6635 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.12 --pullspecs 1.9.24.1. Enhancements The following enhancements are included in this z-stream release: 1.9.24.1.1. Supporting HTTPS for TransferProtocolTypes in Redfish APIs TLS can be enabled for communication between ironic and the baseboard management controller (BMC) in the bootstrap phase of the install processes by adding 'disableVirtualMediaTLS: false' to the Provisioning CR file created on disk by the installer. ( OCPBUGS-39468 ) 1.9.24.1.2. Updating to Kubernetes version 1.29.8 This release contains the changes that come from the update to Kubernetes version 1.29.8. ( OCPBUGS-39015 ) 1.9.24.1.3. Redirecting on web console with Edit Source Code There are two options in the Git Advanced section of the web console: one option is to add a branch, tag, or commit ID and other option is to add the context directory. With this release, if you add the context directory of a particular branch, tag, or commit ID, you are redirected to that directory by selecting the Edit source code icon. If a branch, tag, or commit ID is not entered, you are redirected to the base url as previously expected. ( OCPBUGS-38914 ) 1.9.24.2. Bug fixes Previously, when a large number of secrets in a cluster were fetched in a single call, the API timed out and the CCO threw an error and then restarted. With this release, the CCO pulls the secret list in smaller batches of 100 and the issue is resolved. ( OCPBUGS-41234 ) Previously, Operator Lifecycle Manager (OLM) catalog source pods did not recover from node failure if the registryPoll field was none . With this release, OLM CatalogSource registry pods recover from cluster node failures and the issue is resolved. ( OCPBUGS-41217 ) Previously, the Cluster Ingress Operator logged non-existent updates. With this release, the issue is resolved. ( OCPBUGS-39324 ) Previously, the installation program failed to install an OpenShift Container Platform cluster in the eu-es (Madrid, Spain) region on a IBM Power Virtual Server platform that was configured as an e980 system type. With this release, the installation program no longer fails to install a cluster in this environment. ( OCPBUGS-38502 ) Previously, the Ingress Controller Degraded status would not set because of an issue with the CanaryRepetitiveFailures condition transition time. With this release, the condition transition time is only updated when the condition status changes, instead of when the message or reason are the only changes. ( OCPBUGS-39323 ) Previously, an AdditionalTrustedCA field that was specified in the Hosted Cluster image configuration was not reconciled into the openshift-config namespace as expected and the component was not available. With this release, the issue is resolved. ( OCPBUGS-39293 ) Previously, an installer regression issue caused problems with Nutanix cluster deployments using the Dynamic Host Configuration Protocol (DHCP) network. With this release, the issue is resolved. 
( OCPBUGS-38956 ) Previously, a rare condition caused the CAPV session to time out unexpectedly. With this release, the Keep Alive support is disabled in later versions of CAPV, and the issue is resolved. ( OCPBUGS-38822 ) Previously, the version number text in the updates graph on the Cluster Settings appeared as black text on a dark background while viewing the page using Firefox in dark mode. With this update, the text appears as white text. ( OCPBUGS-38424 ) Previously, proxying for Operators that run in the control plane of a HyperShift cluster was performed through proxy settings on the konnectivity agent pod that runs in the data plane. As a result, it was not possible to distinguish whether proxying was needed based on application protocol. For parity with {rh-short}, IDP communication through https/http should be proxied, but LDAP communication should not be proxied. With this release, how proxy is handled in hosted clusters is changed to invoke the proxy in the control plane via konnectivity-https-proxy and konnectivity-socks5-proxy , and to stop proxying traffic from the konnectivity agent. ( OCPBUGS-38062 ) 1.9.24.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.25. RHBA-2024:6401 - OpenShift Container Platform 4.16.11 bug fix update Issued: 11 September 2024 OpenShift Container Platform release 4.16.11 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:6401 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:6404 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.11 --pullspecs 1.9.25.1. Known issues Red Hat OpenShift Service on AWS hosted control planes (HCP) and OpenShift Container Platform clusters fail to add new nodes in MachinePool versions older than 4.15.23. As a result, some updates are blocked. To see what clusters are affected and the recommended workaround, see ( ROSA upgrade issue mitigation for HOSTEDCP-1941 ). ( OCPBUGS-39447 ) 1.9.25.2. Bug fixes Previously, the noProxy field from the cluster-wide Proxy wasn't taken into account while configuring proxying for the Platform Prometheus remote write endpoints. With this release, Cluster Monitoring Operator (CMO) no longer configures proxying for any remote write endpoint whose URL should bypass proxy according to noProxy . ( OCPBUGS-39170 ) Previously, Red Hat HyperShift periodic conformance jobs failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. ( OCPBUGS-38942 ) Previously, for egress IP, if an IP is assigned to an egress node and it is deleted, then pods selected by that egressIP might have incorrect routing information to that egress node. With this release, the issue is fixed. ( OCPBUGS-38705 ) Previously, the installation program failed to install an OpenShift Container Platform cluster in the eu-es (Madrid, Spain) region on a IBM Power Virtual Server platform that is configured as an e980 system type. 
With this release, the installation program no longer fails to install a cluster in this environment. ( OCPBUGS-38502 ) Previously, updating the firmware for the BareMetalHosts (BMH) resource by editing the HostFirmwareComponents resource would result in the BMH remaining in the Preparing state such that it would execute the firmware update repeatedly. This issue has been resolved. ( OCPBUGS-35559 ) 1.9.25.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.26. RHSA-2024:6004 - OpenShift Container Platform 4.16.10 bug fix update Issued: 3 September 2024 OpenShift Container Platform release 4.16.10 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6004 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:6007 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.10 --pullspecs 1.9.26.1. Enhancements 1.9.26.1.1. Updating the CENTOS 8 references to CENTOS 9 CENTOS 8 has recently ended its lifecycle. This release updates the CENTOS 8 references to CENTOS 9. ( OCPBUGS-38627 ) 1.9.26.2. Bug fixes Previously, the egressip controller failed to correctly manage the assignment of EgressIP addresses for network interfaces associated with Virtual Routing and Forwarding (VRF) tables. As a result, when a VRF instance was configured for a network interface, packets were not routed correctly because OVN-K used the main routing table instead of the VRF's routing table. With this update, the egressip controller uses the VRF's routing table when a VRF instance is configured on a network interface, ensuring accurate EgressIP assignment and correct traffic routing. ( OCPBUGS-38704 ) Previously, an internal timeout occurred when the service account had short-lived credentials. This release removes the timeout and allows the parent context to control the timeout. ( OCPBUGS-38196 ) Previously, when a user with limited permission attempted to delete an application that was deployed using Serveless, an error occurred. With this release, a check is added to determine that the user has permission to list the Pipeline resources. ( OCPBUGS-37954 ) Previously, utlization cards displayed limit in a way that incorrectly implied a relationship between capacity and limits. With this release, the position of limit is changed to remove this implication. ( OCPBUGS-37430 ) 1.9.26.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.27. RHBA-2024:5757 - OpenShift Container Platform 4.16.9 bug fix update Issued: 29 August 2024 OpenShift Container Platform release 4.16.9 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:5757 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:5760 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.9 --pullspecs 1.9.27.1. Enhancements The Insights Operator (IO) can now collect data from the haproxy_exporter_server_threshold metric. ( OCPBUGS-38230 ) 1.9.27.2. 
Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.28. RHSA-2024:5422 - OpenShift Container Platform 4.16.8 bug fix and security update Issued: 20 August 2024 OpenShift Container Platform release 4.16.8, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:5422 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:5425 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.8 --pullspecs 1.9.28.1. Bug fixes Previously, when you clicked the Red Hat OpenShift Lightspeed link on the Settings page of your OpenShift Container Platform cluster, the OpenShift Lightspeed modal in Operator Hub did not open. With this update, the OpenShift Lightspeed modal opens as expected. ( OCPBUGS-38093 ) Previously, when you mirrored Operator catalogs with the --rebuild-catalogs argument, catalog cache was recreated on the local machine. This required extraction and use of the opm binary from the catalog image, which caused failure of either the mirroring operation or the catalog source. These failures would happen because the supported operating system and the platform of the opm binary caused a mismatch with the operating system and platform of oc-mirror . With this release, the value of true is applied to the --rebuild-catalogs argument by default; any catalog rebuilds do not re-create internal cache. Additionally, this release updates the image from opm serve /configs --cache-dir=/tmp/cache to opm serve /configs so that the creation of cache happens at pod startup. Cache at startup might increase pod startup time. ( OCPBUGS-38035 ) Previously, the PrometheusRemoteWriteBehind alert was only triggered after Prometheus sent data to the remote-write endpoint on at least one occasion. With this release, the alert now also triggers if a connection could never be established with the endpoint, such as when an error exists with the endpoint URL from the time you added it to the remote-write endpoint configuration. ( OCPBUGS-36918 ) Previously, the build controller did not gracefully handle multiple MachineOSBuild objects that use the same secret. With this release, the build controller can handle these objects as expected. ( OCPBUGS-36171 ) Previously, role bindings related to the ImageRegistry , Build , and DeploymentConfig capabilities were created in every namespace, even if the capability was disabled. With this release, the role bindings are only created if the cluster capability is enabled on the cluster. ( OCPBUGS-34384 ) 1.9.28.2. Known issues An error might occur when deleting a pod that uses an SR-IOV network device. This error is caused by a change in RHEL 9 where the name of a network interface is added to its alternative names list when it is renamed. As a consequence, when a pod attached to an SR-IOV virtual function (VF) is deleted, the VF returns to the pool with a new unexpected name, for example dev69 , instead of its original name, for example ensf0v2 . Although this error is non-fatal, Multus and SR-IOV logs might show the error while the system reboots. Deleting the pod might take a few extra seconds due to this error. ( OCPBUGS-11281 , OCPBUGS-18822 , RHEL-5988 ) 1.9.28.3. 
Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.29. RHSA-2024:5107 - OpenShift Container Platform 4.16.7 bug fix and security update Issued: 13 August 2024 OpenShift Container Platform release 4.16.7, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:5107 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:5110 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.7 --pullspecs 1.9.29.1. Bug fixes Previously, the openshift-install CLI sometimes failed to connect to the bootstrap node when collecting bootstrap gather logs. The installation program reported an error message such as The bootstrap machine did not execute the release-image.service systemd unit . With this release and after the bootstrap gather logs issue occurs, the installation program now reports Invalid log bundle or the bootstrap machine could not be reached and bootstrap logs were not collected , which is a more accurate error message. ( OCPBUGS-37838 ) Previously, after a firmware update through the HostFirmwareComponents resource, the resource would not show the newer information about the installed firmware in Status.Components . With this release, after a firmware update is run and the BareMetalHosts (BMH) object moves to provisioning , the newer information about the firmware is populated in the HostFirmwareComponents resource under Status.Components. ( OCPBUGS-37765 ) Previously, oc-mirror plugin v2 for tags were not created for the OpenShift Container Platform release images. Some container registries depend on these tags as mandatory tags. With this release, these tags are added to all release images. ( OCPBUGS-37757 ) Previously, extracting the IP address from the Cluster API Machine object only returned a single address. On VMware vSphere, the returned address would always be an IPv6 address and this caused issues with the must-gather implementation if the address was non-routable. With this release, the Cluster API Machine object returns all IP addresses, including IPv4, so that the must-gather issue no longer occurs on VMware vSphere. ( OCPBUGS-37607 ) Previously, the installation program incorrectly required Amazon Web Services (AWS) permissions for creating Identity and Access Management (IAM) roles for an OpenShift Container Platform cluster that already had these roles. With this release, the installation program only requests permissions for roles not yet created. ( OCPBUGS-37494 ) Previously, when you attempted to install a cluster on Red Hat OpenStack Platform (RHOSP) and you used special characters, such as the hash sign ( # ) in a cluster name, the Neutron API failed to tag a security group with the name of the cluster. This caused the installation of the cluster to fail. With this release, the installation program uses an alternative endpoint to tag security groups and this endpoint supports the use of special characters in tag names. ( OCPBUGS-37492 ) Previously, the Dell iDRAC baseboard management controller (BMC) with the Redfish protocol caused clusters to fail on the Dell iDRAC servers. With this release, an update to the idrac-redfish management interface to unset the ipxe parameter fixed this issue. 
( OCPBUGS-37262 ) Previously, the assisted-installer did not reload new data from the assisted-service when the assisted-installer checked control plane nodes for readiness and a conflict existed with a write operation from the assisted-installer-controller . This conflict prevented the assisted-installer from detecting a node that was marked by the assisted-installer-controller as Ready because the assisted-installer relied on older information. With this release, the assisted-installer can receive the newest information from the assisted-service , so that it the assisted-installer can accurately detect the status of each node. ( OCPBUGS-37167 ) Previously, the DNS-based egress firewall incorrectly caused memory increases for nodes running in a cluster because of multiple retry operations. With this release, the retry logic is fixed so that DNS pods no longer leak excess memory to nodes. ( OCPBUGS-37078 ) Previously, HostedClusterConfigOperator resource did not delete the ImageDigestMirrorSet (IDMS) object after a user removed the ImageContentSources field from the HostedCluster object. This caused the IDMS object to remain in the HostedCluster object. With this release, HostedClusterConfigOperator removes all IDMS resources in the HostedCluster object so that this issue no longer exists. ( OCPBUGS-36766 ) Previously, in a cluster that runs OpenShift Container Platform 4.16 with the Telco RAN DU reference configuration, long duration cyclictest or timerlat tests could fail with maximum latencies detected above 20 us. This issue occured because the psi kernel command line argument was being set to 1 by default when cgroup v2 is enabled. With this release, the issue is fixed by setting psi=0 in the kernel arguments when enabling cgroup v2. The cyclictest latency issue reported in OCPBUGS-34022 is now also fixed. ( OCPBUGS-37271 ) 1.9.29.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.30. RHSA-2024:4965 - OpenShift Container Platform 4.16.6 bug fix Issued: 6 August 2024 OpenShift Container Platform release 4.16.6 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4965 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:4968 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.6 --pullspecs 1.9.30.1. Enhancements The following enhancements are included in this z-stream release: 1.9.30.1.1. Ingress Controller certificate expiration dates collected The Insights Operator now collects information about all Ingress Controller certificate expiration dates. The information is put into a JSON file in the path aggregated/ingress_controllers_certs.json . ( OCPBUGS-37671 ) 1.9.30.1.2. Enabling debug log levels Previously, you could not control log levels for the internal component that selects IP addresses for cluster nodes. With this release, you can now enable debug log levels so that you can either increase or decrease log levels on demand. To adjust log levels, you must create a config map manifest file with a configuration similar to the following: apiVersion: v1 data: enable-nodeip-debug: "true" kind: ConfigMap metadata: name: logging namespace: openshift-vsphere-infra # ... ( OCPBUGS-35891 ) 1.9.30.1.3. 
Ironic and Inspector htpasswd improvement Previously, the Ironic and Inspector htpasswd were provided to the ironic-image using environment variables, which is not secure. From this release, the Ironic htpasswd is provided to ironic-image using the /auth/ironic/htpasswd file, and the Inspector htpasswd is provided to ironic-image using the /auth/inspector/htpasswd file for better security. ( OCPBUGS-36285 ) 1.9.30.2. Bug fixes Previously, installer-created subnets were being tagged with kubernetes.io/cluster/<clusterID>: shared . With this release, subnets are now tagged with kubernetes.io/cluster/<clusterID>: owned . ( OCPBUGS-37510 ) Previously, the same node was queued multiple times in the draining controller, which caused the the same node to be drained twice. With this release, a node will only be drained once. ( OCPBUGS-37470 ) Previously, cordoned nodes in machine config pools (MCPs) with higher maxUnavailable than unavailable nodes might be selected as an update candidate. With this release, cordoned nodes will never be queued for an update. ( OCPBUGS-37460 ) Previously, oc-mirror plugin v2, when running behind proxy with the system proxy configuration set, would attempt to recover signatures for releases without using the system proxy configuration. With this release, the system proxy configuration is taken into account during signature recovery as well and the issue is resolved. ( OCPBUGS-37445 ) Previously, an alert for OVNKubernetesNorthdInactive would not fire in circumstances where it should fire. With this release, the issue is fixed so that the alert for OVNKubernetesNorthdInactive fires as expected. ( OCPBUGS-37362 ) Previously, the Load Balancer ingress rules were continuously revoked and authorized, causing unnecessary Amazon Web Services (AWS) Application Programming Interface (API) calls and cluster provision delays. With this release, the Load Balancer checks for ingress rules that need to be applied and the issue is resolved. ( OCPBUGS-36968 ) Previously, in the OpenShift Container Platform web console, one inactive or idle browser tab caused the session to expire for all other tabs. With this release, activity in any tab will prevent session expiration. ( OCPBUGS-36864 ) Previously, the Open vSwitch (OVS) pinning procedure set the CPU affinity of the main thread, but other CPU threads did not pick up this affinity if they had already been created. As a consequence, some OVS threads did not run on the correct CPU set, which might interfere with the performance of pods with a Quality of Service (QoS) class of Guaranteed . With this update, the OVS pinning procedure updates the affinity of all the OVS threads, ensuring that all OVS threads run on the correct CPU set. ( OCPBUGS-36608 ) Previously, the etcd Operator checked the health of etcd members in serial with an all-member timeout that matched the single-member timeout. That allowed one slow member check to consume the entire timeout, and cause later member checks to fail with the error deadline-exceeded , regardless of the health of that later member. Now, etcd checks the health of members in parallel so the health and speed of one member's check doesn't affect the other members' checks. ( OCPBUGS-36489 ) Previously, you could not change the snapshot limits for the VMware vSphere Container Storage Interface (CSI) driver without enabling the TechPreviewNoUpgrade feature gate because of a missing API that caused a bug with the Cluster Storage Operator. 
With this release, the missing API is added so that you can now change the snapshot limits without having to enable the TechPreviewNoUpgrade feature gate. For more information about changing the snapshot limits, see Changing the maximum number of snapshots for vSphere ( OCPBUGS-36969 ) 1.9.30.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.31. RHBA-2024:4855 - OpenShift Container Platform 4.16.5 bug fix Issued: 31 July 2024 OpenShift Container Platform release 4.16.5 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2024:4855 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:4858 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.5 --pullspecs 1.9.31.1. Bug fixes Previously, with oc-mirror plugin v2 (Technology Preview), when a generated archive was moved to a different machine, the mirroring from archive to the mirror registry operation failed and outputted the following error message: [ERROR]: [ReleaseImageCollector] open USD{FOLDER}/working-dir/hold-release/ocp-release/4.15.17-x86_64/release-manifests/image-references: no such file or directory With this release, the machine that runs oc-mirror receives an automatic update to change its target location to the working directory. ( OCPBUGS-37040 ) Previously, the OpenShift CLI ( oc ) command openshift-install destroy cluster stalled and caused the following error message: VM has a local SSD attached but an undefined value for 'discard-local-ssd' when using A3 instance types With this release, after you issue the command, local SSDs are removed so that this bug no longer persists. ( OCPBUGS-36965 ) Previously, when the Cloud Credential Operator checked if passthrough mode permissions were correct, the Operator sometimes received a response from the Google Cloud Platform (GCP) API about an invalid permission for a project. This bug caused the Operator to enter a degraded state that in turn impacted the installation of the cluster. With this release, the Cloud Credential Operator checks specifically for this error so that it diagnoses it separately without impacting the installation of the cluster. ( OCPBUGS-36834 ) Previously, with oc-mirror plugin v2 (Technology Preview), when a generated archive was moved to a different machine, the mirroring from archive to the mirror registry operation failed and outputted the following error message: [ERROR]: [ReleaseImageCollector] open USD{FOLDER}/working-dir/hold-release/ocp-release/4.15.17-x86_64/release-manifests/image-references: no such file or directory With this release, the machine that runs oc-mirror receives an automatic update to change its target location to the working directory. ( OCPBUGS-37040 ) 1.9.31.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.32. RHSA-2024:4613 - OpenShift Container Platform 4.16.4 bug fix and security update Issued: 24 July 2024 OpenShift Container Platform release 4.16.4, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4613 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:4616 advisory. 
Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.16.4 --pullspecs 1.9.32.1. Bug fixes Previously, a change to the Ingress Operator added logic to clear spec.host and set spec.subdomain on the canary route. However, the Operator's service account did not have the necessary routes/custom-host permission to update spec.host or spec.subdomain on an existing route. With this release, the permission is added to the ClusterRole resource for the Operator's service account and the issue is resolved. ( OCPBUGS-32887 ) Previously, the number of calls to the subscription's fetchOrganization endpoint from the Console Operator was too high, which caused issues with installation. With this release, the organization ID is cached and the issue is resolved. ( OCPBUGS-34012 ) Previously, role bindings related to the ImageRegistry , Build , and DeploymentConfig capabilities were created in every namespace, even if the respective capability was disabled. With this release, the role bindings are only created if the respective cluster capability is enabled on the cluster. ( OCPBUGS-34384 ) Previously, the MetalLB Operator deployed the downstream image when deploying with FRR-K8s, the Border Gateway Protocol (BGP) backend for MetalLB. With this release, the MetalLB Operator deploys the upstream image instead of the dowstream one. ( OCPBUGS-35864 ) Previously, when LUKS encryption was enabled on a system using 512 emulation (512e) disks, the encryption failed at the ignition-ostree-growfs step and reported an error because of an alignment issue. With this release, a workaround is added in the ignition-ostree-growfs step to detect this situation and resolve the alignment issue. ( OCPBUGS-36147 ) Previously, the --bind-address parameter for localhost caused liveness test failure for IBM Power Virtual Server clusters. With this release, the --bind-address parameter for localhost is removed and the issue is resolved. ( OCPBUGS-36317 ) Previously, Operator bundle unpack jobs that had already been created were not found by the Operator Lifecycle Manager (OLM) when installing an Operator. With this release, the issue is resolved. ( OCPBUGS-36450 ) Previously, the etcd data store used for Cluster API-provisioned installations was only removed when either the bootstrap node or the cluster was destroyed. With this release, if there is an error during infrastructure provisioning, the data store is removed and does not take up unnecessary disk space. ( OCPBUGS-36463 ) Previously, enabling custom feature gates could cause the installation to fail in AWS if the feature gate ClusterAPIInstallAWS=true was not enabled. With this release, the ClusterAPIInstallAWS=true feature gate is no longer required. ( OCPBUGS-36720 ) Previously, if create cluster was run after the destroy cluster command, an error would report that local infrastructure provisioning artifacts already exist. With this release, leftover artifacts are removed with destroy cluster and the issue is resolved. ( OCPBUGS-36777 ) Previously, the OperandDetails page displayed information for the first custom resource definition (CRD) that matched by name. With this release, the OperandDetails page displays information for the CRD that matches by name and by the version of the operand. 
( OCPBUGS-36841 ) Previously, if the openshift.io/internal-registry-pull-secret-ref annotation was removed from a ServiceAccount resource, OpenShift Container Platform re-created the deleted annotation and created a new managed image pull secret. This contention could cause the cluster to get overloaded with image pull secrets. With this release, OpenShift Container Platform attempts to reclaim managed image pull secrets that were previously referenced and deletes managed image pull secrets that remain orphaned after reconciliation. ( OCPBUGS-36862 ) Previously, when the installation program stopped because of setup failures, some of its processes remained running. With this release, all installation processes stop when the installation program stops running. ( OCPBUGS-36890 ) Previously, there was no runbook for the ClusterMonitoringOperatorDeprecatedConfig alert. With this release, the runbook for the ClusterMonitoringOperatorDeprecatedConfig alert is added and the issue is resolved. ( OCPBUGS-36907 ) Previously, the Cluster overview page included a View all steps in documentation link that resulted in a 404 error for ROSA and OSD clusters. With this update, the link does not appear for ROSA and OSD clusters. ( OCPBUGS-37063 ) Previously, there was a mismatch between the OpenSSL versions of the Machine Config Operator tools used by OpenShift Container Platform and the OpenSSL version that runs on the hosted control plane. With this release, the issue is resolved. ( OCPBUGS-37241 ) 1.9.32.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.33. RHSA-2024:4469 - OpenShift Container Platform 4.16.3 bug fix and security update Issued: 16 July 2024 OpenShift Container Platform release 4.16.3, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4469 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:4472 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: $ oc adm release info 4.16.3 --pullspecs 1.9.33.1. Enhancements The following enhancements are included in this z-stream release: 1.9.33.1.1. Configuring Capacity Reservation by using machine sets OpenShift Container Platform release 4.16.3 introduces support for on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. For more information, see Configuring Capacity Reservation by using machine sets for compute or control plane machine sets. ( OCPCLOUD-1646 ) 1.9.33.1.2. Adding alternative ingress for disabled ingress clusters With this release, the console Operator configuration API can add alternative ingress to environments where the ingress cluster capability has been disabled. ( OCPBUGS-33788 ) 1.9.33.2. Bug fixes Previously, if spec.grpcPodConfig.securityContextConfig was not set for CatalogSource objects in namespaces with the PodSecurityAdmission "restricted" level enforced, the default securityContext was set as restricted . With this release, the OLM catalog operator configures the catalog pod with the securityContexts necessary to pass PSA validation and the issue has been resolved. ( OCPBUGS-34979 ) Previously, the HighOverallControlPlaneCPU alert triggered warnings based on criteria for multi-node clusters with high availability.
As a result, misleading alerts were triggered in single-node OpenShift clusters because the configuration did not match the environment criteria. This update refines the alert logic to use single-node OpenShift-specific queries and thresholds and to account for workload partitioning settings. As a result, CPU utilization alerts in single-node OpenShift clusters are accurate and relevant to single-node configurations. ( OCPBUGS-35831 ) Previously, the --bind-address parameter set to localhost caused the liveness test to fail for PowerVS clusters. With this release, the --bind-address parameter set to localhost is removed and the issue has been resolved. ( OCPBUGS-36317 ) Previously, nodes that were booted using 4.1 and 4.2 boot images for OpenShift Container Platform got stuck during provisioning because the machine-config-daemon-firstboot.service had incompatible machine-config-daemon binary code. With this release, the binary has been updated and the issue has been resolved. ( OCPBUGS-36330 ) Previously, there was no access to the source registry when the diskToMirror action was performed on a fully disconnected environment. When using oc-mirror v2 in MirrorToDisk , the catalog image and contents are stored under a subfolder under working-dir that corresponds to the digest of the image. Then, while using DiskToMirror , oc-mirror attempts to call the source registry to resolve the catalog image tag to a digest to find the corresponding subfolder on disk. With this release, oc-mirror interrogates the local cache during the diskToMirror process to determine this digest. ( OCPBUGS-36386 ) Previously, if a new deployment was performed at the OSTree level on a host that was identical to the current deployment but on a different stateroot, OSTree saw them as equal. This behavior incorrectly prevented the boot loader from updating when set-default was invoked, as OSTree did not recognize the two stateroots as a differentiation factor for deployments. With this release, OSTree's logic has been modified to consider the stateroots and allows OSTree to properly set the default deployment to a new deployment with different stateroots. ( OCPBUGS-36386 ) Previously, Installer logs for AWS clusters contained unnecessary messages about the Elastic Kubernetes Service (EKS) that could lead to confusion. With this release, the EKS log lines are disabled and the issue has been resolved. ( OCPBUGS-36447 ) Previously, a change of dependency targets was introduced in OpenShift Container Platform 4.14 that prevented disconnected ARO installs from scaling up new nodes after they upgraded to affected versions. With this release, disconnected ARO installs can scale up new nodes after upgrading to OpenShift Container Platform 4.16. ( OCPBUGS-36536 ) Previously, a connection refused error on port 9637 was reported as Target Down for Windows nodes because CRI-O does not run on Windows nodes. With this release, Windows nodes are excluded from the Kubelet Service Monitor. ( OCPBUGS-36717 ) 1.9.33.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.34. RHSA-2024:4316 - OpenShift Container Platform 4.16.2 bug fix and security update Issued: 9 July 2024 OpenShift Container Platform release 4.16.2, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4316 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:4319 advisory.
Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: $ oc adm release info 4.16.2 --pullspecs 1.9.34.1. Bug fixes Previously, for clusters upgraded from older versions of OpenShift Container Platform, enabling kdump on an OVN-enabled cluster sometimes prevented the node from rejoining the cluster or returning to the Ready state. With this release, stale data from older OpenShift Container Platform versions is removed and is now always cleaned up. The node can now start correctly and rejoin the cluster. ( OCPBUGS-36198 ) Previously, unexpected output would appear in the terminal when creating an installer-provisioned infrastructure (IPI) cluster. With this release, the issue has been resolved and the unexpected output no longer appears. ( OCPBUGS-36156 ) Previously, the OpenShift Container Platform console did not show filesystem metrics on the nodes list. With this release, the filesystem metrics now appear in the nodes table. ( OCPBUGS-35946 ) Previously, the Prometheus dashboard showed up empty for non-multi-cluster environments. With this release, the dashboard populates the dashboard panels as expected in both multi-cluster and non-multi-cluster environments. ( OCPBUGS-35904 ) Previously, a regression in 4.16.0 caused new baremetal installer-provisioned infrastructure (IPI) installations to fail when proxies were used. This was caused by one of the services in the bootstrap virtual machine (VM) trying to access IP address 0.0.0.0 through the proxy. With this release, this service no longer accesses 0.0.0.0. ( OCPBUGS-35818 ) Previously, the Cluster API Provider IBM Cloud waited for some resources to be created before creating the load balancers on IBM Power Virtual Server clusters. This delay sometimes resulted in the load balancers not being created before the 15 minute timeout. With this release, the timeout has been increased. ( OCPBUGS-35722 ) Previously, when installing a cluster on Red Hat OpenStack Platform (RHOSP) using the Cluster API implementation, the additional security group rule added to control plane nodes for compact clusters was forcing IPv4 protocol and prevented deploying dual-stack clusters. This was a regression from installations using Terraform. With this release, the rule now uses the correct protocol based on the requested IP version. ( OCPBUGS-35718 ) Previously, the internal image registry would not correctly authenticate users on clusters configured with external OpenID Connect (OIDC) users, making it impossible for users to push or pull images to and from the internal image registry. With this release, the internal image registry starts using the SelfSubjectReview API, dropping use of the OpenShift Container Platform specific user API, which is not available on clusters configured with external OIDC users, making it possible to successfully authenticate with the image registry again. ( OCPBUGS-35567 ) Previously, an errant code change resulted in a duplicated oauth.config.openshift.io item on the Global Configuration page. With this update, the duplicated item is removed. ( OCPBUGS-35565 ) Previously, with oc-mirror v2, when mirroring failed for various reasons, such as network errors or invalid operator catalog content, oc-mirror did not generate cluster resources.
With this bug fix, oc-mirror v2 performs the following actions: Continues mirroring other images when errors occur on Operator images and additional images, and aborts mirroring when errors occur on release images. Generates cluster resources for the cluster based on the subset of correctly mirrored images. Collects all mirroring errors in a log file. Logs all mirroring errors in a separate log file. ( OCPBUGS-35409 ) Previously, pseudolocalization was not working in the OpenShift Container Platform console due to a configuration issue. With this release, the issue is resolved and pseudolocalization works again. ( OCPBUGS-35408 ) Previously, the must-gather process ran too long while collecting CPU-related performance data for nodes because the data was collected sequentially for each node. With this release, the node data is collected in parallel, which significantly shortens the must-gather data collection time. ( OCPBUGS-35357 ) Previously, builds could not set the GIT_LFS_SKIP_SMUDGE environment variable and use its value when cloning source code. This caused builds to fail for some git repositories with LFS files. With this release, the build is allowed to set this environment variable and use it during the git clone step of the build. ( OCPBUGS-35283 ) Previously, registry overrides were present in non-relevant data plane images. With this release, the way OpenShift Container Platform propagates the override-registries has been modified and the issue is fixed. ( OCPBUGS-34602 ) Previously, RegistryMirrorProvider images were not being updated during the reconciliation because RegistryMirrorProvider was modifying the cached image directly instead of the internal entries. With this release, the image update process has been modified to bypass the cache and update the internal entries directly, so the issue no longer occurs. ( OCPBUGS-34569 ) Previously, the alertmanager-trusted-ca-bundle ConfigMap was not injected into the user-defined Alertmanager container, which prevented the verification of HTTPS web servers receiving alert notifications. With this update, the trusted CA bundle ConfigMap is mounted into the Alertmanager container at the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path. ( OCPBUGS-34530 ) Previously, for Amazon Web Services (AWS) clusters that use Security Token Service (STS), the Cloud Credential Operator (CCO) checked the value of awsSTSIAMRoleARN in the CredentialsRequest custom resource to create a secret. When awsSTSIAMRoleARN was not present, CCO logged an error. The issue is resolved in this release. ( OCPBUGS-34117 ) Previously, with the OVN-Kubernetes setting for routing-via-host set to shared gateway mode, its default value, OVN-Kubernetes did not correctly handle traffic streams that mixed non-fragmented and fragmented packets from the IP layer on cluster ingress. This caused connection resets or packet drops. With this release, OVN-Kubernetes correctly reassembles and handles external traffic IP packet fragments on ingress. ( OCPBUGS-29511 ) 1.9.34.2. Known issue If the maximum transmission unit (MTU) ConfigMap is absent in the openshift-network-operator namespace, users must create the ConfigMap manually with the machine MTU value before starting the live migration. Otherwise, the live migration gets stuck and fails. ( OCPBUGS-35829 ) 1.9.34.3. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.35.
RHSA-2024:4156 - OpenShift Container Platform 4.16.1 bug fix and security update Issued: 3 July 2024 OpenShift Container Platform release 4.16.1, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:4156 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:4159 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: $ oc adm release info 4.16.1 --pullspecs 1.9.35.1. Bug fixes Previously, an error in growpart caused the device to be locked, which prevented the Linux Unified Key Setup-on-disk-format (LUKS) device from being opened. As a result, the node was unable to boot and went into emergency mode. With this release, the call to growpart is removed and this issue is fixed. ( OCPBUGS-35973 ) Previously, a bug in systemd might have caused the coreos-multipath-trigger.service unit to hang indefinitely. As a result, the system would never finish booting. With this release, the systemd unit was removed and the issue is fixed. ( OCPBUGS-35748 ) Previously, the KMS key was applied as an empty string, which caused the key to be invalid. With this release, the empty string is removed and the KMS key is only applied when one exists from the install-config.yaml . ( OCPBUGS-35531 ) Previously, there was no validation of the values for confidential compute and on host maintenance set by the user. With this release, when confidential compute is enabled by the user, the value for onHostMaintenance must be set to onHostMaintenance: Terminate . ( OCPBUGS-35493 ) Previously, in user-provisioned infrastructure (UPI) clusters or clusters that were upgraded from older versions, failureDomains might be missing in Infrastructure objects, which caused certain checks to fail. With this release, a failureDomains fallback is synthesized from cloudConfig if none are available in infrastructures.config.openshift.io . ( OCPBUGS-35446 ) Previously, when a new version of a custom resource definition (CRD) specified a new conversion strategy, this conversion strategy was expected to successfully convert resources. This was not the case because Operator Lifecycle Manager (OLM) cannot run the new conversion strategies for CRD validation without actually performing the update operation. With this release, OLM generates a warning message during the update process when CRD validations fail with the existing conversion strategy and the new conversion strategy is specified in the new version of the CRD. ( OCPBUGS-35373 ) Previously, Amazon Web Services (AWS) HyperShift clusters leveraged their Amazon Virtual Private Cloud (VPC)'s primary classless inter-domain routing (CIDR) range to generate security group rules on the data plane. As a consequence, installing AWS HyperShift clusters into an AWS VPC with multiple CIDR ranges could cause the generated security group rules to be insufficient. With this update, security group rules are generated based on the provided Machine CIDR range to resolve this issue. ( OCPBUGS-35056 ) Previously, the Source-to-Image (S2I) build strategy needed to be explicitly added to the func.yaml file in order to create a Serverless function. Additionally, the error message did not indicate the problem. With this release, users can still create the Serverless function when S2I is not explicitly added. However, if a build strategy other than S2I is specified, users cannot create the function.
Additionally, the error messages have been updated to provide more information. ( OCPBUGS-34717 ) Previously, the CurrentImagePullSecret field on the MachineOSConfig object was not being used when rolling out new on-cluster layering build images. With this release, the CurrentImagePullSecret field on the MachineOSConfig object can be used by the image rollout process. ( OCPBUGS-34261 ) Previously, when multiple failing port-forwarding requests were sent, CRI-O memory usage increased until the node failed. With this release, the memory leak that occurred when a failing port-forward request was sent is fixed and the issue is resolved. ( OCPBUGS-30978 ) Previously, the oc get podmetrics and oc get nodemetrics commands were not working properly. This update fixes the issue. ( OCPBUGS-25164 ) 1.9.35.2. Updating To update an existing OpenShift Container Platform 4.16 cluster to this latest release, see Updating a cluster using the CLI . 1.9.36. RHSA-2024:0041 - OpenShift Container Platform 4.16.0 image release, bug fix, and security update advisory Issued: 27 June 2024 OpenShift Container Platform release 4.16.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:0041 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:0045 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: $ oc adm release info 4.16.0 --pullspecs
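If you only need the pull spec of a single component from that listing, the output can be filtered; this is an optional convenience, not a step from the advisory, and the component name below is only an illustrative example:
$ oc adm release info 4.16.0 --pullspecs | grep machine-config-operator   # print only the line that contains this component's pull spec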
[ "featureSet: CustomNoUpgrade featureGates: - ClusterAPIInstall=true", "Warning: short name \"ex\" could also match lower priority resource examples.test.com", "- lastTransitionTime: \"2024-04-11T05:54:37Z\" message: Cluster is configured with OpenShiftSDN, which is not supported in the next version. Please follow the documented steps to migrate from OpenShiftSDN to OVN-Kubernetes in order to be able to upgrade. https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html reason: OpenShiftSDNConfigured status: \"False\" type: Upgradeable", "This pod appears to have created one or more iptables rules. IPTables is deprecated and will no longer be available in RHEL 10 and later. You should consider migrating to another API such as nftables or eBPF.", "Platform.BareMetal.externalBridge: Invalid value: \"baremetal\": could not find interface \"baremetal\"", "apiVersion: v1 kind: ConfigMap metadata: name: mtu namespace: openshift-network-operator data: mtu: \"1500\" 1", "oc adm release info 4.16.37 --pullspecs", "oc adm release info 4.16.36 --pullspecs", "oc adm release info 4.16.35 --pullspecs", "oc adm release info 4.16.34 --pullspecs", "oc adm release info 4.16.33 --pullspecs", "oc adm release info 4.16.32 --pullspecs", "oc adm release info 4.16.30 --pullspecs", "oc adm release info 4.16.29 --pullspecs", "oc adm release info 4.16.28 --pullspecs", "oc adm release info 4.16.27 --pullspecs", "oc adm release info 4.16.26 --pullspecs", "oc adm release info 4.16.25 --pullspecs", "oc adm release info 4.16.24 --pullspecs", "oc adm release info 4.16.23 --pullspecs", "oc adm release info 4.16.21 --pullspecs", "oc adm release info 4.16.20 --pullspecs", "oc adm release info 4.16.19 --pullspecs", "oc adm release info 4.16.18 --pullspecs", "oc adm release info 4.16.17 --pullspecs", "oc adm release info 4.16.16 --pullspecs", "oc adm release info 4.16.15 --pullspecs", "oc adm release info 4.16.14 --pullspecs", "oc adm release info 4.16.13 --pullspecs", "oc adm release info 4.16.12 --pullspecs", "oc adm release info 4.16.11 --pullspecs", "oc adm release info 4.16.10 --pullspecs", "oc adm release info 4.16.9 --pullspecs", "oc adm release info 4.16.8 --pullspecs", "oc adm release info 4.16.7 --pullspecs", "oc adm release info 4.16.6 --pullspecs", "apiVersion: v1 data: enable-nodeip-debug: \"true\" kind: ConfigMap metadata: name: logging namespace: openshift-vsphere-infra", "oc adm release info 4.16.5 --pullspecs", "[ERROR]: [ReleaseImageCollector] open USD{FOLDER}/working-dir/hold-release/ocp-release/4.15.17-x86_64/release-manifests/image-references: no such file or directory", "VM has a local SSD attached but an undefined value for 'discard-local-ssd' when using A3 instance types", "[ERROR]: [ReleaseImageCollector] open USD{FOLDER}/working-dir/hold-release/ocp-release/4.15.17-x86_64/release-manifests/image-references: no such file or directory", "oc adm release info 4.16.4 --pullspecs", "oc adm release info 4.16.3 --pullspecs", "oc adm release info 4.16.2 --pullspecs", "oc adm release info 4.16.1 --pullspecs", "oc adm release info 4.16.0 --pullspecs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/release_notes/ocp-4-16-release-notes
Release notes for Red Hat build of OpenJDK 8.0.432
Release notes for Red Hat build of OpenJDK 8.0.432 Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.432/index
Chapter 7. Compiler and Tools
Chapter 7. Compiler and Tools Support for new instructions in IBM z Systems z13 The new version of GCC brings support for the new hardware instructions of the IBM z Systems z13, along with support for SIMD instructions. The -march=z13 command-line option is needed to enable the new intrinsics. (BZ#1182152) GCC now generates optimal code for POWER8 On the PowerPC 64 LE architecture, the GCC compiler is now configured with the --with-cpu=power8 and --with-tune=power8 parameters, to make GCC generate optimal code for POWER8 platforms. (BZ#1213268) Support for Intel Memory Protection Keys (IMPK) This update to the GCC compiler provides support for IMPK - the compiler can now generate the new PKU instructions. The new instructions can be enabled by using the -mpku command-line option. (BZ#1304449) gcc-libraries rebased The gcc-libraries package has been rebased to the latest GCC 5 version to include various bug fixes and enhancements from the upstream version. (BZ#1265252) GDB now supports IBM z13 features This update provides a GDB extension for debugging code utilizing IBM z13 features. This includes disassembling extended IBM z13 instructions and supporting SIMD instructions using 128-bit wide vector registers v0-v31 . Code optimized for IBM z13 can be now debugged by GDB displaying correct instruction mnemonics, vector registers, and retrieving and passing vector register content during inferior calls. (BZ#1182151) binutils rebased to version 2.25.1 The highlights of the new rebased binutils package include: The strings program now has a --data command-line option which only prints strings in loadable, initialized data sections. The default behaviour has been changed to match the --all command-line option. The strings program now has a --include-all-whitespace command-line option which treats any non-displaying ASCII character as part of the string. This includes carriage return and new line characters which otherwise would be considered to be line terminators. The objcopy program now has a --dump-section command-line option to extract the contents of named sections and copy them into separate files. The objcopy program now supports wildcard characters in command-line options that take section names. The as assembler now has a --gdwarf-sections command-line option to enable the generation of per-code-section DWARF.debug_line sections. This facilitates the removal of those sections when their corresponding code section is removed by linker garbage collection. (BZ# 1341730 ) Support for the z13 extensions to IBM z Systems architecture. This update provides multiple upstream patches combined into a single patch and applied to the Red Hat Enterprise Linux 7 binutils package. The z13 extensions are now supported. (BZ#1364516) Support for MWAITX The updated binutils package for the 32-bit AMD and Intel architecture now provides support for the MWAITX instruction. (BZ#1335684) Support for Zeppelin The updated binutils package for the 32-bit AMD and Intel architecture now provides support for the Zeppelin extensions. (BZ#1335313) Support for the Large System Extensions The updated binutils package now provides support for the Large System Extensions to the AArch64 assembler. In addition, support for the .arch_extension pseudo-operation has also been added. (BZ#1276755) elfutils rebased to version 0.166 The elfutils packages contain a number of utilities and libraries related to the creation and maintenance of executable code. The package has been upgraded to version 0.166. 
Highlighted improvements include: strip , unstrip - These utilities can now handle ELF files with merged strtab/shstrtab tables. elfcompress - A new utility to compress or decompress ELF sections. readelf - A new -z,--decompress option. New functions have been added to libelf and libdw to handle compressed ELF sections: elf_compress , elf_compress_gnu , elf32_getchdr , elf64_getchdr , and gelf_getchdr . libdwelf - a new dwelf_scn_gnu_compressed_size() function. New libelf and libdw pkgconfig (package configuration) files. (BZ# 1296313 ) valgrind rebased to version 3.11.0 Valgrind is an instrumentation framework that is used for debugging memory, detecting memory leaks, and profiling applications. The package has been upgraded to upstream version 3.11.0. Highlighted improvements include: The JIT's register allocator is now significantly faster, making JIT-intensive activities, for example program startup, approximately 5% faster. Intel AVX2 support is now more complete for 64-bit targets. On AVX2-capable hosts, the simulated CPUID will now indicate AVX2 support. The default value for the --smc-check option has been changed from stack to all-non-file on targets that provide automatic D-I cache coherence. The result is to provide, by default, transparent support for JIT generated and self-modifying code on all targets. Highlighted new features in the Memcheck utility include: The default value for the --leak-check-heuristics option has been changed from none to all . This helps to reduce the number of possibly lost blocks, in particular for C++ applications. The default value for the --keep-stacktraces option has been changed from malloc-then-free to malloc-and-free . This has a small cost in memory but allows Memcheck to show the 3 stack traces of a dangling reference: where the block was allocated, where it was freed, and where it is accessed after being freed. The default value for the --partial-loads-ok option has been changed from no to yes , to avoid false-positive errors resulting from certain vectorised loops. A new gdb monitor command xb [addr] [len] shows the validity bits of [len] bytes at [addr] . The monitor command xb is easier to use than get_vbits when you need to associate byte data value with their corresponding validity bits. The block_list gdb monitor command has been enhanced: it can print a range of loss records; it now accepts an optional argument, limited [max_blocks] , to control the number of printed blocks; if a block has been found using a heuristic, then block_list now shows the heuristic after the block size; the loss records/blocks to print can be limited to the blocks found via specified heuristics. A new --expensive-definedness-checks=yes|no command-line option has been added. This is useful for avoiding occasional invalid uninitialized-value errors in optimized code. Beware of potential runtime degradation, as this can be up to 25%. The slowdown is highly application-specific though. The default value is no . (BZ# 1296318 ) Interception of user-defined allocation functions in valgrind Some applications do not use the glibc allocator. Consequently, it was not always convenient to run such applications under valgrind . With this update, valgrind tries to automatically intercept user-defined memory allocation functions as if the program used the normal glibc allocator, making it possible to use memory tracing utilities such as memcheck on those programs out of the box. 
(BZ# 1271754 ) systemtap rebased to version 3.0 The systemtap packages have been updated to upstream version 3.0, which provides a number of bug fixes and enhancements. For example, the translator has been improved to require less memory, produce faster code, support more function callee probing, print improved diagnostics, include language extensions for function overloading and private scoping, and introduce experimental --monitor and --interactive modes. (BZ# 1289617 ) Support for the 7th-generation Core i3, i5, and i7 Intel processors This update provides a complete set of performance monitoring events for the 7th-generation Core i3, i5, and i7 Intel processors (Kabylake-U/Y). (BZ#1310950) Support for the 7th-generation Core i3, i5, and i7 Intel processors This update provides a complete set of performance monitoring events for the 7th-generation Core i3, i5, and i7 Intel processors (Kabylake-H/S). (BZ#1310951) libpfm rebased to version 4.7.0 The libpfm package has been upgraded to version 4.7.0. This version provides support for the following 32-bit AMD and Intel architectures: Intel Skylake core PMU Intel Haswell-EP uncore PMUs Intel Broadwell-DE Intel Broadwell (desktop core) Intel Haswell-EP (core) Intel Haswell-EP (core) Intel Ivy Bridge-EP uncore PMUs (all boxes) Intel Silvermont core PMU Intel RAPL events support Intel SNB, IVB, HSW event table updates Major update on Intel event tables AMD Fam15h Northbridge PMU (BZ# 1321051 ) gssproxy now supports RELRO and PIE The GSS-API gssproxy daemon is now built using the security-related RELRO and PIE compile-time flags to harden the daemon. As a result, gssproxy provides a higher security against loader memory area overwrite attempts and memory corruption attacks. (BZ#1092515) iputils rebased to version 20160308 The iputils packages have been upgraded to upstream version 20160308, which provides a number of bug fixes and enhancements over the version. Notably, the ping command is now dual stack aware. It can be used for probing both IPv4 and IPv6 addresses. The old ping6 command is now a symbolic link to the ping command and works the same way as before. (BZ# 1273336 ) Logging capabilities of the tftp server have been enhanced As a result of improved logging, the Trivial File Transfer Protocol (TFTP) server can now track successes and failures. For example, a log event is now created when a client successfully finishes downloading a file, or the file not found message is provided in case of a failure. (BZ#1311092) New option for arpwatch: -p This update introduces option -p for the arpwatch command of the arpwatch network monitoring tool. This option disables promiscuous mode. (BZ#1291722) The chrt utility now has new options This update introduces new command-line options for the chrt utility: --deadline , --sched-runtime , --sched-period , and --sched-deadline . These options take advantage of the kernel SCHED_DEADLINE scheduler and provide full control of deadline scheduling policy for scripts and when using the command line. (BZ#1298384) New command-line utility: lsipc This update introduces the lsipc utility that lists information about inter-process communication (IPC) facilities. In comparison with the old ipcs command, lsipc provides more details, is easier to use in scripts, and is more user-friendly. This results into better control of the output on IPC information for scripts and when using the command line. 
(BZ#1153770) Searching using libmount and findmnt is now more reliable Overlay filesystem's st_dev does not provide possibility for reliable searching to the libmount library and the findmnt utility. With this update, libmount and findmnt search in mount tables by other means than with st_dev in some cases, achieving better reliability. (BZ#587393) New --family option for the alternatives utility This update introduces the new --family option for the alternatives utility. The software packager can use this option to group similar alternative packages from the same group into families. Families inside groups ensure that if the currently used alternative is removed, and it belonged to a family, then the current alternative will change to package with the highest priority within the same family, and not outside the family. For example, a system has four packages installed in the same alternatives group: a1 , a2 , a3 , b (listed in increasing priority). Packages a1 , a2 , and a3 belong to the same family. a1 is the currently used alternative. If a1 is removed, then the currently used alternative will change to a3 . It will not be b , because b is outside the family of a1 , and it will not be a2 , because a2 has lower priority than a3 . This option is useful when just setting priorities for each alternative is not enough. For example, all openjdk packages can be put into the same family to ensure that if one of them is uninstalled, the alternative will switch to another openjdk package, and not to the java-1.7.0-oracle package (if another openjdk package is installed). (BZ# 1291340 ) sos rebased to version 3.3 The sos package has been updated to upstream version 3.3, which provides a number of enhancements, new features, and bug fixes, including: Support for OpenShift Enterprise 3.x Improved and expanded OpenStack plug-ins Enhanced support for Open vSwitch Enhanced Kubernetes data collection Improved support for systemd journal collection Enhanced display manager and 3D acceleration data capture Improved support for Linux clusters, including Pacemaker Expanded CPU and NUMA topology collection Expanded mainframe (IBM z Systems) coverage Collection of multipath topology (BZ#1293044) ethtool rebased to version 4.5 The ethtool utility enables querying and changing settings such as speed, port, auto-negotiation, PCI locations, and checksum offload on many network devices, especially Ethernet devices. The package has been upgraded to upstream version 4.5. Notable improvements include: SFP serial number and date are now included in EEPROM dump (option -m ) Added missing Advertised speeds, some combinations of 10GbE and 56GbE Added register dump support for VMware vmxnet3 (option -d ) Added support for setting the default Rx flow indirection table (option -X ) (BZ#1318316) pcp rebased to version 3.11.3 Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for acquisition, archiving, and analysis of system-level performance measurements. The package has been upgraded to version 3.11.3. 
Highlighted improvements include: pcp-ipcs - new command to show inter-process communication pcp-atopsar - new PMAPI sar command based on http://atoptool.nl pcp-vmstat - wrapper for pmstat modified to more closely resemble vmstat libpcp - new fetchgroup API pmdamic - new PMDA for Intel MIC card metrics pmdaslurm - new PMDA exporting HPC scheduler metrics pmdapipe - command output event capture PMDA pmdaxfs - support for per-device XFS metrics pmdavmware - updated to work with current VMWare Perl API pmdaperfevent - variety of improvements surrounding derived metrics; added reference clock cycles for NHM and WSM pmdaoracle - Oracle database metrics available and updated pmdads389 - added normalized dn cache metrics pmdalinux - added metrics for per numa node memory bandwidth, shared memory segments, IPC, MD driver stats, transparent-huge-page zero page alloc counters, NVME devices, IPv6 metrics pmdaelasticsearch - restrict to local node metrics by default and adjust to elasticsearch API change pmdaxfs - support for per-device XFS metrics pmrep - powerful and versatile metric-reporting utility pmlogconf - support for automatic recording of Oracle database, nginx, elasticsearch, memcache, and application metrics supplied by mmv zbxpcp - Zabbix Agent loadable module for PCP metrics supporting Zabbix v2 and v3 simultaneously pmcd - support for starting PMDAs via pmdaroot , allowing restart on PMDA failure without restarting pmcd itself sar2pcp - support for additional mem.util metrics and sysstat-11.0.1 commands pmmgr - added general monitor-program launching option pcp-atop - updated with latest atop features (especially NFS-related) libpcp - allowed the name of a server certificate to be customized; added support for permanent, global derived metrics, and multi-archive contexts pmdaproc - cgroup blkio throttle throughput and IOPS metrics pcp-iostat - added the -R flag for device-name matching using regular expressions and the -G flag for sum , avg , min , or max statistics pmieconf - new rule to automate restarting of unresponsive PMDAs (BZ# 1284307 ) OpenJDK 8 now supports ECC With this update, support for Elliptic Curve Cryptography (ECC) and associated ciphers for TLS connections has been added to OpenJDK 8 . In most cases, ECC is preferable to older cryptographic solutions for establishing secure network connections. (BZ# 1245810 ) pycurl now provides options to require TLSv1.1 or 1.2 With this update, pycurl has been enhanced to support options that make it possible to require the use of the 1.1 or 1.2 versions of the TLS protocol, which improves the security of communication. (BZ# 1260407 ) Perl Net:SSLeay now supports elliptic curve parameters Support for elliptic-curve parameters has been added to the Perl Net:SSLeay module, which contains bindings to the OpenSSL library. Namely, the EC_KEY_new_by_curve_name() , EC_KEY_free*() , SSL_CTX_set_tmp_ecdh() , and OBJ_txt2nid() subroutines have been ported from upstream. This is required for the support of the Elliptic Curve Diffie-Hellman Exchange (ECDHE) key exchange in the IO::Socket::SSL Perl module. (BZ# 1316379 ) Perl IO::Socket::SSL now supports ECDHE Support for Elliptic Curve Diffie-Hellman Exchange (ECDHE) has been added to the IO::Socket::SSL Perl module. The new SSL_ecdh_curve option can be used for specifying a suitable curve by the Object Identifier (OID) or Name Identifier (NID). As a result, it is now possible to override the default elliptic curve parameters when implementing a TLS client using IO::Socket:SSL . 
(BZ# 1316377 ) tcsh now uses system allocation functions The tcsh command language interpreter now uses allocation functions from the glibc library instead of built-in allocation functions. This eliminates earlier problems with the malloc() library call. (BZ#1315713) Python performance enhancement The CPython interpreter now uses computed goto statements at the main switch statement, which executes Python bytecode. This enhancement allows the interpreter to avoid a bounds check that is required by the C99 standard for the switch statement, and allows the CPU to perform more efficient branch prediction, which reduces pipeline flushes. As a result of this enhancement, Python code is interpreted significantly faster than before. (BZ#1289277) telnet now accepts -i to use an IP address when calling login When a computer on a network has multiple IP addresses, it was previously possible to use one address to connect to the telnet server, but the other addresses were saved in the /var/run/utmp file. To prevent the telnet utility from performing a DNS lookup and ensure that telnet uses a particular IP address when calling the login utility, you can now use the -i option. Note that -i works in the same way as the -N option on Debian systems. (BZ# 1323094 ) sg3_utils rebased to version 1.37-7 The sg3_utils packages provide command-line utilities for devices that use the Small Computer System Interface (SCSI) command sets. With this update, the sg_inq and sg_vpd utilities allow decoding of more feature information on storage devices. Additionally, the presentation of date and software version information is now displayed correctly. The sg_rdac utility has been fixed as well and now supports 10-byte Command Descriptor Block (CDB) mode, which allows management of up to 256 logical unit numbers (LUN). (BZ#1170719) New configuration options for SSL/TLS certificate verification for the HTTP clients in the Python standard library New per-application and per-process configuration options for SSL/TLS certificate verification have been added for the HTTP clients in the Python standard library. The options are described in the 493 Python Enhancement Proposal ( https://www.python.org/dev/peps/pep-0493/ ). The default global setting continues to be to not verify certificates. For details, see https://access.redhat.com/articles/2039753 . (BZ# 1315758 ) glibc now supports the BIG5-HKSCS-2008 character set Previously, glibc supported an earlier version of the Hong Kong Supplementary Character Set, BIG5-HKSCS-2004. The BIG5-HKSCS character set map has been updated to the HKSCS-2008 revision of the standard. This allows Red Hat Enterprise Linux customers to write applications processing text that is encoded with this version of the standard. (BZ# 1211823 ) memtest86+ rebased to version 5.01 The memtest86+ package has been upgraded to upstream version 5.01, which provides a number of bug fixes and enhancements over the version. Notable changes include the following: Support for up to 2 TB of RAM on AMD64 and Intel 64 CPUs Support for new Intel and AMD CPUs, for example Intel Haswell Experimental SMT support up to 32 cores For detailed changes, see http://www.memtest.org/#change (BZ#1280352) mcelog rebased to version 136 The mcelog packages have been upgraded to upstream version 136, which provides a number of bug fixes and enhancements over the version. Notably, support for various 5th and 6th generation Intel Core processors (Broadwell-DE/SoC, Broadwell-EP, Broadwell-EX, and Skylake Client) has been included. 
(BZ#1336431) xz rebased to version 5.2.2 The xz packages have been upgraded to upstream version 5.2.2, which provides several optimization fixes, fixes for race conditions, translations, portability fixes, and also a new stabilized API previously available only for testing. Additionally, this update introduces a new experimental feature controlled by the --flush-timeout option (by default off). When compressing, if more than timeout milliseconds (a positive integer) have passed since the flush and reading more input would be blocked, all the pending input data is flushed from the encoder and made available in the output stream. This can be useful if the xz utility is used for compressing data that is streamed over a network. (BZ#1160193) tapestat has been added to sysstat The sysstat packages now provide the tapestat utility, which can be used to monitor performance of tape drives. (BZ#1332662) sysstat now supports a larger number of processors The sysstat packages now support the maximum number of processors supported by the Linux kernel, which is 8192 at the time of Red Hat Enterprise Linux 7.3 release. Previously, sysstat could not handle more than 2048 processors. (BZ#1258990) ruby rebased to version 2.0.0.648 The ruby packages have been upgraded to upstream version 2.0.0.648, which provides a number of bug and security fixes. This is the last upstream stable release of Ruby 2.0.0 as it has been deprecated in upstream. More recent versions of Ruby are available in Red Hat Software Collections. (BZ# 1197720 ) Enhancements to abrt reporting workflow The problem-reporting workflow in abrt has been enhanced to improve the overall crash-reporting experience and customer-case creation. The enhancements include: The Provide additional information screen now allows you to select whether the problem happens repeatedly, and also contains an additional input field for providing steps to reproduce the problem. A new reporting workflow Submit anonymous report , which should be used when the reported problem is not critical and no Red Hat support team assistance is required. New tests have been added to the internal logic to ensure that users only open cases for critical problems and software released by Red Hat. (BZ#1258482) abrt can now exclude specific programs from generating a core dump Previously, ignoring crashes of blacklisted programs in abrt did not prevent it from creating their core dumps, which were written to disk and then deleted. This approach allowed abrt to notify system administrators of a crash while not using disk space to store unneeded crash dumps. However, creating these dumps only to delete them later was unnecessarily wasting system resources. This update introduces a new configuration option IgnoredPaths in the /etc/abrt/plugins/CCpp.conf configuration file, which allows you to specify a comma-separated list of file system path patterns, for which core dump will not be generated at all. (BZ#1277848) User and group whitelisting added to abrt Previously, abrt allowed all users to generate and collect core dumps, which could potentially enable any user to maliciously generate a large number of core dumps and waste system resources. This update adds a whitelisting functionality to abrt , and you can now only allow specific users or groups to generate core dumps. Use the new AllowedUsers = user1, user2, ... and AllowedGroups = group1, group2, ... 
options in the /etc/abrt/plugins/CCpp.conf configuration file to restrict core dump generation and collection to these users or groups, or leave these options empty to configure abrt to process core dumps for all users and groups. (BZ#1277849) Format of emails sent by ABRT is now configurable You can now configure the format of emails sent by ABRT using the new -F FORMAT_FILE command-line option of the reporter-mailx utility. This option allows you to define your own format. Without the -F option, reporter-mailx uses the default format, which sorts all important elements by importance. For more information about the format of formatting files, see the reporter-mailx(1) man page. (BZ# 1281312 ) The Oracle ACFS is now included among known file systems Previously, the Oracle ASM Cluster file system (ACFS) was not listed among known file systems for the stat and tail utilities. As a consequence, the tail utility printed an error message stating that the file system was not recognized. ACFS has been added to the list of known file systems, and the error message no longer appears in the described situation. In addition, other file systems recognized by upstream have been added to the list of known file systems as well, namely bpf_fs , btrfs_test , configfs , hfs+ , hfsx , ibrix , logfs , m1fs , nsfs , overlayfs , prl_fs , and tracefs . (BZ# 1280357 ) Support for Octave 3.8 used by swig Previously, the Octave code generated by swig 2.0.10 did not work with Octave 3.8, because it contained deprecated bits such as variables and macros. This update ensures that swig produces code which works with Octave of versions 3.0.5, 3.2.4, 3.4.3, 3.6.4, and 3.8.0. (BZ# 1136487 ) The sos cluster plug-in has been divided into type-specific plug-ins The cluster plug-in in the sos package has been divided into several plug-ins ( cman , dlm , gfs2 , and pacemaker ). The new plug-in organization reflects that there are two different types of cluster ( cman and pacemaker ) and prevents certain commands from needing to be run multiple times. (BZ#1187258) libvpd rebased to version 2.2.5 The libvpd packages have been updated to upstream version 2.2.5, which provides a number of bug fixes and enhancements over the version. Notably, it also implements several security fixes, including the buffer overflow and memory allocation validation. (BZ#1182031) Man pages for pchrt and ptaskset added to python-schedutils This update adds man pages for the pchrt and ptaskset utilities, which are provided by the python-schedutils package. (BZ#948381) The socket timeout value for SSL connections of the subscription-manager client is now configurable Previously, the socket timeout value for SSL connections to an entitlement server was hard-coded. With this update, users can configure a custom SSL timeout value in the /etc/rhsm/rhsm.conf file. Setting a larger SSL timeout helps ensure that expensive operations involving many subscriptions have enough time to complete. (BZ# 1346417 ) redhat-uep.pem CA certificate moved to a python-rhsm-certificates package The /etc/rhsm/ca/redhat-uep.pem certificate authority (CA) certificate was previously included in the python-rhsm package. This update moves this certificate into a simplified python-rhsm-certificates package that provides only the certificate. As a result, container images can now be built only with python-rhsm-certificates without all the package dependencies required by python-rhsm , specifically the python package. 
(BZ# 1104332 ) gfs2-utils rebased to version 3.1.9 The gfs2-utils package has been updated to upstream version 3.1.9, which provides a number of enhancements, new features, and bug fixes, including the following: fsck.gfs2 now uses less memory Improvements and fixes to the extended attributes and resource group checking of fsck.gfs2 mkfs.gfs2 reports progress so that the user can tell it is still active during a long mkfs operation The -t option of mkfs.gfs2 now accepts a longer cluster name and file system name A udev helper script is now installed to suspend the device on withdraw, which prevents hangs Support for the de_rahead and de_cookie dirent fields has been added gfs2_edit savemeta performance improvements The glocktop utility has been added to help analyze locking-related performance problems The mkfs.gfs2(8) man page has been reworked The rgrplbv and loccookie mount options have been added to the gfs2(5) man page Fixes for out-of-tree builds and testing (BZ#1271674) system-switch-java rebased to version 1.7 The system-switch-java package, which provides an easy-to-use tool to select the default Java toolset for the system, has been updated to version 1.7. The new version has been rewritten to support modern JDK packages. The main enhancements include support for multiple Java installations, addition of -debug packages, and support for JDK 9. (BZ# 1283904 ) Optional branch predictor optimization for certain Intel micro-architectures The branch predictor in the 2nd generation Xeon Phi and 3rd generation Atom micro-architectures only supports 32-bit offsets between branch and branch targets. If a branch and its target were further apart than 4 GiB, performance was very poor. With this update, glibc maps the main program and shared objects into the first 31 bits of the address space if the LD_PREFER_MAP_32BIT_EXEC environment variable is set, improving performance on the described architectures. Note that this improvement reduces address space layout randomization (ASLR) and is therefore not enabled by default. (BZ#1292018) Optimized memory routines for Intel hardware using AVX 512 This update provides optimized memory copying routines to the core C library (glibc) for Intel hardware using AVX 512. These optimized routines are automatically selected when applications use the C library memcpy() , memmove() , or memset() function on AVX 512-enabled hardware. The AVX 512-enabled memory copying routine provides the best possible performance on the latest Intel hardware that supports this feature, particularly on the second-generation Xeon Phi systems. (BZ#1298526) Better-performance memset() routine This update provides a key optimization to the core C library memset() routine for Intel Xeon v5 server hardware. The existing memset() routine for AMD64 and Intel 64 architectures made extensive use of non-temporal stores, a hardware feature which does not provide uniform performance across hardware variants. The new memset() provides better performance across hardware variants, including Intel Xeon v5 hardware. (BZ#1335286) Support for the --instLangs option in glibc The glibc-common packages provide a large locale archive containing data for all locales supported by glibc . Typical installations only need a subset of these locales, and installing all of them is wasteful. With this update, it is possible to create system installations and container images which only include required locales, greatly reducing image size. 
(BZ#1296297) Optimizations in glibc for IBM POWER8 With this update, all libraries provided by glibc have been compiled for optimal execution on POWER8 hardware. Optimized memory and string manipulation routines for 64-bit IBM POWER7 and POWER8 hardware have been added to the core C library (glibc). These optimized routines are automatically selected when applications use C library routines like strncat() or strncmp() . These POWER7 and POWER8-enabled routines provide the best possible performance on the latest IBM hardware. (BZ# 1213267 , BZ#1183088, BZ#1240351) Optimizations in glibc for IBM z Systems z13 The core C library (glibc) has been enhanced to provide optimized support for IBM z Systems z13 hardware. Core string and memory manipulation routines such as strncpy() or memcpy() have all been optimized. The z13-enabled routines provide the best possible performance on the latest IBM hardware. (BZ#1268008) Origin plug-in added to the sos package The origin plug-in has been added to the sos package. The plug-in collects information about OpenShift Origin and related products, such as Atomic Platform or OpenShift Enterprise 3 and higher. This allows users to gather information about OpenShift Origin deployments. (BZ# 1246423 ) gssproxy now supports krb5 1.14 The gssproxy packages, which provide a daemon to manage access to GSSAPI credentials, as well as a GSSAPI interposer plug-in, have been updated to upstream version 0.4.1-10. gssproxy now supports the krb5 packages in version 1.14. (BZ# 1292487 ) A possibility to configure optional SSH key files for the ABRT reporter-upload tool has been added This update adds the possibility to configure an SSH key in the reporter-upload utility of Automatic Bug Reporting Tool (ABRT). To specify the key file, choose one of the following ways: Using the SSHPublicKey and SSHPrivateKey options in the /etc/libreport/plugins/upload.conf configuration file Using the -b and -r command-line options for the public and private key, respectively Setting the Upload_SSHPublicKey and Upload_SSHPrivateKey environment variables, respectively. If none of these options or variables are specified, reporter-upload uses the default SSH key from the user's ~/.ssh/ directory. (BZ#1289513)
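As a brief sketch of the reporter-upload options described in the preceding note, the following shows the configuration-file form of the settings together with an equivalent one-off run that uses the documented environment variables; the key paths and the problem directory are hypothetical placeholders, not values taken from this release:
# /etc/libreport/plugins/upload.conf (illustrative key paths)
SSHPublicKey = /root/.ssh/abrt_upload.pub
SSHPrivateKey = /root/.ssh/abrt_upload
# Equivalent single run using the documented environment variables;
# the problem directory passed with -d is a placeholder:
Upload_SSHPublicKey=/root/.ssh/abrt_upload.pub Upload_SSHPrivateKey=/root/.ssh/abrt_upload reporter-upload -d /var/spool/abrt/ccpp-2016-08-01-10:00:00-1234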
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/new_features_compiler_and_tools
1.3. Fencing Configuration
1.3. Fencing Configuration You must configure a fencing device for each node in the cluster. For information about the fence configuration commands and options, see the Red Hat Enterprise Linux 7 High Availability Add-On Reference . For general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster . Note When configuring a fencing device, attention should be given to whether that device shares power with any nodes or devices in the cluster. If a node and its fence device do share power, then the cluster may be at risk of being unable to fence that node if the power to it and its fence device should be lost. Such a cluster should either have redundant power supplies for fence devices and nodes, or redundant fence devices that do not share power. Alternative methods of fencing such as SBD or storage fencing may also bring redundancy in the event of isolated power losses. This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map and pcmk_host_list options. You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com . The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc . By default, this device will use a monitor interval of sixty seconds for each node. Note that you can use an IP address when specifying the host name for the nodes. Note When you create a fence_apc_snmp stonith device, you may see the following warning message, which you can safely ignore: The following command displays the parameters of an existing STONITH device. After configuring your fence device, you should test the device. For information on testing a fence device, see Fencing: Configuring Stonith in the High Availability Add-On Reference . Note Do not test your fence device by disabling the network interface, as this will not properly test fencing. Note Once fencing is configured and a cluster has been started, a network restart will trigger fencing for the node which restarts the network even when the timeout is not exceeded. For this reason, do not restart the network service while the cluster service is running because it will trigger unintentional fencing on the node.
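For example, with the myapc device configured as shown above, one way to exercise fencing is to manually fence a node from another cluster node and then confirm that it returns; run such a test only on a node whose workloads can safely tolerate a reboot:
pcs stonith fence z2.example.com   # requests immediate fencing of node z2.example.com through the configured device
pcs status                         # verify that the node was fenced and later rejoins the cluster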
[ "pcs stonith create myapc fence_apc_snmp ipaddr=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" pcmk_host_check=\"static-list\" pcmk_host_list=\"z1.example.com,z2.example.com\" login=\"apc\" passwd=\"apc\"", "Warning: missing required option(s): 'port, action' for resource type: stonith:fence_apc_snmp", "pcs stonith show myapc Resource: myapc (class=stonith type=fence_apc_snmp) Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 pcmk_host_check=static-list pcmk_host_list=z1.example.com,z2.example.com login=apc passwd=apc Operations: monitor interval=60s (myapc-monitor-interval-60s)" ]
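A short sketch of testing the fence device from the command line follows; it is not part of the original procedure, and the node and resource names are taken from the example above. Fencing a node power-cycles it, so only run this against a node you can afford to restart.

# Review the stonith resource that was created above
pcs stonith show myapc

# Manually fence one node to confirm that the device works; the node will be power-cycled
pcs stonith fence z1.example.com

# From the surviving node, confirm cluster membership and resource status
pcs status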
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-fenceconfig-HAAA
Chapter 1. Red Hat Cloud Access program overview
Chapter 1. Red Hat Cloud Access program overview The Red Hat Cloud Access program is designed to provide subscription portability for customers who want to use their Red Hat product subscriptions in the cloud. Red Hat Cloud Access provides the following customer benefits: Cloud Access is available with most Red Hat subscriptions at no cost. You keep all the benefits of a Red Hat subscription and maintain your existing support relationship with Red Hat. You have flexibility and choice for how and where you use your Red Hat products. You have access to value-add features and capabilities, like gold images and Azure Hybrid Benefit for Linux . 1.1. Cloud Access product eligibility Subscription portability is a feature included with most Red Hat products and is key to creating open hybrid cloud infrastructures built on Red Hat technologies. Most Red Hat products are cloud-ready by default but the nature of multi-tenant public clouds (a wide range of providers, differing technologies/platforms, and shared infrastructures) as well as a customer's limited access to those infrastructures can create technical challenges that customers should be aware of. The following examples are general guidelines to help you understand Cloud Access product eligibility: Your subscription term must be active. The subscription is available to use in the cloud, that is, it is not currently in use elsewhere. The subscription has a cloud compatible unit of measure, depending on your cloud provider and the instance type you are deploying. Some examples of cloud units of measure are core, core band, managed node, RAM, storage band, vCPU, or Virtual Node/Guest. The Red Hat product you are deploying on the cloud is technically suitable for use in a multi-tenant public cloud infrastructure. Examples of products and subscriptions that are not eligible include the following: Virtual Datacenter or other unlimited RHEL guest subscriptions that require virt-who Red Hat Virtualization products; nested virtualization is not supported Subscriptions that have a physical unit of measure such as socket or socket-pair Subscriptions for Red Hat-hosted offerings These guidelines are not definitive, and Red Hat product and subscription eligibility change over time as we introduce new products and subscription types. It is also a good idea to refer to the Red Hat product documentation for any specific details about the product's use on a public cloud infrastructure. If you are unsure about the eligibility of your Red Hat products for public cloud use, contact your Red Hat account manager. 1.2. Unit conversion for Red Hat Cloud Access-eligible subscriptions To understand your subscription usage in the cloud, you need to be able to count based upon the unit of measure associated with each subscription as well as understand the relationship between subscriptions and entitlements. Each Red Hat subscription includes at least one entitlement that can be used to register a system with Red Hat subscription management tooling. Red Hat subscriptions used in virtualized environments like the public cloud may include an additional number of entitlements. For example, a single Red Hat Enterprise Linux Server (RHEL) (Physical or Virtual Node) subscription includes 1 physical entitlement or 2 virtual entitlements. When a subscription of this type is used on physical, bare metal hardware, it entitles a single physical RHEL server. When it is used in a virtualized environment like the public cloud, it entitles up to 2 virtual RHEL servers. 
Unit conversions differ widely depending upon the Red Hat product, subscription type, and deployment environment, but the following table contains some general guidelines. Table 1.1. Red Hat Cloud Access Unit Conversion Table
Physical or Virtual Node | 1 physical node or 2 virtual nodes | 2 virtual nodes
System | typically sockets or cores | 1 virtual node
Core or vCPU | cores | vCPUs (typically 2vCPU:1Core)
Core Band | groups of cores (for example, 2, 4, 16, 64, 128) | vCPUs (typically 2vCPU:1Core)
Socket | socket, socket-pair, cores | N/A
Additional resources See the Red Hat Subscription Manager user interface inside the Red Hat Customer Portal for entitlement quantities, units of measure, and related details for each of your Red Hat product subscriptions. See Appendix 1 of the Red Hat Enterprise Agreement for more details about units of measure, conversions, and counting guidelines for Red Hat products. 1.3. Cloud Access provider eligibility Red Hat has a large ecosystem of Certified Cloud and Service Provider (CCSP) partners, where Cloud Access customers can use their eligible subscriptions. The Red Hat Ecosystem Catalog contains details about our featured providers (Alibaba, AWS, Google Cloud Platform, IBM Cloud, and Microsoft Azure) as well as other providers with certified cloud images and instance types. Consider these recommendations when you choose a Cloud Access provider: The provider must have a supported mechanism for customers to import their virtual machine images into the provider's environment. Note Look for CCSP partners offering Image Upload in the cloud ecosystem catalog. If image upload is not possible, Cloud Access customers need to use Red Hat gold images or have the ability to convert an on-demand PAYG Red Hat image or instance to BYOS. Note Cloud Access gold images are available on AWS, Azure, and Google. The Azure Hybrid Benefit for Linux provides a PAYG-to-BYOS conversion capability for Red Hat Cloud Access customers. The provider should be a TSANet member and collaborate with Red Hat when necessary to solve common customer issues. Red Hat strives to help customers successfully deploy and use Red Hat products across their hybrid cloud infrastructures. The Cloud Access product eligibility and provider guidelines help ensure customer success. We urge customers to follow them. Customers choosing to deploy Red Hat products outside of these guidelines should be aware of the following conditions: The product or subscription may not work as designed. Product performance may be degraded. Product features and capabilities may be limited. Red Hat may not be able to provide the expected level of support. See Red Hat's third-party support policy for more details.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/red_hat_cloud_access_reference_guide/red-hat-cloud-access-program-overview_cloud-access
Chapter 15. Performance tuning for automation controller
Chapter 15. Performance tuning for automation controller Tune your automation controller to optimize performance and scalability. When planning your workload, ensure that you identify your performance and scaling needs, adjust for any limitations, and monitor your deployment. Automation controller is a distributed system with multiple components that you can tune, including the following: Task system in charge of scheduling jobs Control Plane in charge of controlling jobs and processing output Execution plane where jobs run Web server in charge of serving the API Websocket system that serve and broadcast websocket connections and data Database used by multiple components 15.1. Capacity Planning for deploying automation controller Capacity planning for automation controller is planning the scale and characteristics of your deployment so that it has the capacity to run the planned workload. Capacity planning includes the following phases: Characterizing your workload Reviewing the capabilities of different node types Planning the deployment based on the requirements of your workload 15.1.1. Characteristics of your workload Before planning your deployment, establish the workload that you want to support. Consider the following factors to characterize an automation controller workload: Managed hosts Tasks per hour per host Maximum number of concurrent jobs that you want to support Maximum number of forks set on jobs. Forks determine the number of hosts that a job acts on concurrently. Maximum API requests per second Node size that you prefer to deploy (CPU/Memory/Disk) 15.1.2. Types of nodes in automation controller You can configure four types of nodes in an automation controller deployment: Control nodes Hybrid nodes Execution nodes Hop nodes 15.1.2.1. Benefits of scaling control nodes Control and hybrid nodes provide control capacity. They provide the ability to start jobs and process their output into the database. Every job is assigned a control node. In the default configuration, each job requires one capacity unit to control. For example, a control node with 100 capacity units can control a maximum of 100 jobs. Vertically scaling a control node by deploying a larger virtual machine with more resources increases the following capabilities of the control plane: The number of jobs that a control node can perform control tasks for, which requires both more CPU and memory. The number of job events a control node can process concurrently. Scaling CPU and memory in the same proportion is recommended, for example, 1 CPU: 4 GB RAM. Even when memory consumption is high, increasing the CPU of an instance can often relieve pressure. The majority of the memory that control nodes consume is from unprocessed events that are stored in a memory-based queue. Note Vertically scaling a control node does not automatically increase the number of workers that handle web requests. An alternative to vertically scaling is horizontally scaling by deploying more control nodes. This allows spreading control tasks across more nodes as well as allowing web traffic to be spread over more nodes, given that you provision a load balancer to spread requests across nodes. Horizontally scaling by deploying more control nodes in many ways can be preferable as it additionally provides for more redundancy and workload isolation in the event that a control node goes down or experiences higher than normal load. 15.1.2.2. Benefits of scaling execution nodes Execution and hybrid nodes provide execution capacity. 
The capacity consumed by a job is equal to the number of forks set on the job template or the number of hosts in the inventory, whichever is less, plus one additional capacity unit to account for the main ansible process. For example, a job template with the default forks value of 5 acting on an inventory with 50 hosts consumes 6 capacity units from the execution node it is assigned to. Vertically scaling an execution node by deploying a larger virtual machine with more resources provides more forks for job execution. This increases the number of concurrent jobs that an instance can run. In general, scaling CPU alongside memory in the same proportion is recommended. Like control and hybrid nodes, there is a capacity adjustment on each execution node that you can use to align actual use with the estimation of capacity consumption that the automation controller makes. By default, all nodes are set to the top of that range. If actual monitoring data reveals the node to be over-used, decreasing the capacity adjustment can help bring this in line with actual usage. An alternative to vertically scaling execution nodes is horizontally scaling the execution plane by deploying more virtual machines to be execution nodes. Because horizontally scaling can provide additional isolation of workloads, you can assign different instances to different instance groups. You can then assign these instance groups to organizations, inventories, or job templates. For example, you can configure an instance group that can only be used for running jobs against a certain Inventory. In this scenario, by horizontally scaling the execution plane, you can ensure that lower-priority jobs do not block higher-priority jobs 15.1.2.3. Benefits of scaling hop nodes Because hop nodes use very low memory and CPU, vertically scaling these nodes does not impact capacity. Monitor the network bandwidth of any hop node that serves as the sole connection between many execution nodes and the control plane. If bandwidth use is saturated, consider changing the network. Horizontally scaling by adding more hop nodes could provide redundancy in the event that one hop node goes down, which can allow traffic to continue to flow between the control plane and the execution nodes. 15.1.2.4. Ratio of control to execution capacity Assuming default configuration, the maximum recommended ratio of control capacity to execution capacity is 1:5 in traditional VM deployments. This ensures that there is enough control capacity to run jobs on all the execution capacity available and process the output. Any less control capacity in relation to the execution capacity, and it would not be able to launch enough jobs to use the execution capacity. There are cases in which you might want to modify this ratio closer to 1:1. For example, in cases where a job produces a high level of job events, reducing the amount of execution capacity in relation to the control capacity helps relieve pressure on the control nodes to process that output. 15.2. Example capacity planning exercise After you have determined the workload capacity that you want to support, you must plan your deployment based on the requirements of the workload. To help you with your deployment, review the following planning exercise. For this example, the cluster must support the following capacity: 300 managed hosts 1,000 tasks per hour per host or 16 tasks per minute per host 10 concurrent jobs Forks set to 5 on playbooks. This is the default. 
Average event size is 1 Mb The virtual machines have 4 CPU and 16 GB RAM, and disks that have 3000 IOPs. 15.2.1. Example workload requirements For this example capacity planning exercise, use the following workload requirements: Execution capacity To run the 10 concurrent jobs requires at least 60 units of execution capacity. You calculate this by using the following equation: (10 jobs * 5 forks) + (10 jobs * 1 base task impact of a job) = 60 execution capacity Control capacity To control 10 concurrent jobs requires at least 10 units of control capacity. To calculate the number of events per hour that you need to support 300 managed hosts and 1,000 tasks per hour per host, use the following equation: 1000 tasks * 300 managed hosts per hour = 300,000 events per hour at minimum. You must run the job to see exactly how many events it produces, because this is dependent on the specific task and verbosity. For example, a debug task printing "Hello World" produces 6 job events with the verbosity of 1 on one host. With a verbosity of 3, it produces 34 job events on one host. Therefore, you must estimate that the task produces at least 6 events. This would produce closer to 3,000,000 events per hour, or approximately 833 events per second. Determining quantity of execution and control nodes needed To determine how many execution and control nodes you need, reference the experimental results in the following table, which shows the observed event processing rate of a single control node with 5 execution nodes of equal size (API Capacity column). The default "forks" setting of job templates is 5, so using this default, the maximum number of jobs a control node can dispatch to execution nodes makes 5 execution nodes of equal CPU/RAM use 100% of their capacity, arriving at the previously mentioned 1:5 ratio of control to execution capacity.
Node | API capacity | Default execution capacity | Default control capacity | Mean event processing rate at 100% capacity usage | Mean event processing rate at 50% capacity usage | Mean event processing rate at 40% capacity usage
4 CPU at 2.5 GHz, 16 GB RAM control node, a maximum of 3000 IOPs disk | approximately 10 requests per second | n/a | 137 jobs | 1100 per second | 1400 per second | 1630 per second
4 CPU at 2.5 GHz, 16 GB RAM execution node, a maximum of 3000 IOPs disk | n/a | 137 | n/a | n/a | n/a | n/a
4 CPU at 2.5 GHz, 16 GB RAM database node, a maximum of 3000 IOPs disk | n/a | n/a | n/a | n/a | n/a | n/a
Because controlling jobs competes with job event processing on the control node, over-provisioning control capacity can reduce processing times. When processing times are high, you can experience a delay between when the job runs and when you can view the output in the API or UI. For this example, for a workload on 300 managed hosts, executing 1000 tasks per hour per host, 10 concurrent jobs with forks set to 5 on playbooks, and an average event size of 1 Mb, use the following procedure: Deploy 1 execution node, 1 control node, 1 database node of 4 CPU at 2.5 GHz, 16 GB RAM, and disks that have approximately 3000 IOPs. Keep the default fork setting of 5 on job templates. Use the capacity adjustment feature in the instance view of the UI on the control node to reduce the capacity down to 16, the lowest value, to reserve more of the control node's capacity for processing events. Additional Resources For more information on workloads with high levels of API interaction, see Scaling Automation Controller for API Driven Workloads .
For more information on managing capacity with instances, see Managing Capacity With Instances . For more information on operator-based deployments, see Red Hat Ansible Automation Platform Performance Considerations for Operator Based Installations . 15.3. Performance troubleshooting for automation controller Users experience many request timeouts (504 or 503 errors), or in general high API latency. In the UI, clients face slow login and long wait times for pages to load. What system is the likely culprit? If these issues occur only on login, and you use external authentication, the problem is likely with the integration of your external authentication provider. See Setting up enterprise authentication or seek Red Hat Support. For other issues with timeouts or high API latency, see Web server tuning . Long wait times for job output to load. Job output streams from the execution node where the ansible-playbook is actually run to the associated control node. Then the callback receiver serializes this data and writes it to the database. Relevant settings to observe and tune can be found in Settings for managing job event processing and PostgreSQL database configuration and maintenance for automation controller . In general, to resolve this symptom it is important to observe the CPU and memory use of the control nodes. If CPU or memory use is very high, you can either horizontally scale the control plane by deploying more virtual machines to be control nodes, which naturally spreads out the work, or modify the number of jobs a control node manages at a time. For more information, see Capacity settings for control and execution nodes . What can I do to increase the number of jobs that automation controller can run concurrently? Factors that cause jobs to remain in "pending" state are: Waiting for "dependencies" to finish : this includes project updates and inventory updates when "update on launch" behavior is enabled. The "allow_simultaneous" setting of the job template : if multiple jobs of the same job template are in "pending" status, check the "allow_simultaneous" setting of the job template ("Concurrent Jobs" checkbox in the UI). If this is not enabled, only one job from a job template can run at a time. The "forks" value of your job template : the default value is 5. The amount of capacity required to run the job is roughly the forks value (some small overhead is accounted for). If the forks value is set to a very large number, this will limit what nodes will be able to run it. Lack of either control or execution capacity : see "awx_instance_remaining_capacity" metric from the application metrics available on /api/v2/metrics. See Metrics for monitoring automation controller application for more information about how to monitor metrics. See Capacity planning for deploying automation controller for information on how to plan your deployment to handle the number of jobs you are interested in. Jobs run more slowly on automation controller than on a local machine. Some additional overhead is expected, because automation controller might be dispatching your job to a separate node. In this case, automation controller is starting a container and running ansible-playbook there, serializing all output and writing it to a database. Project update on launch and inventory update on launch behavior can cause additional delays at job start time.
Size of projects can impact how long it takes to start the job, as the project is updated on the control node and transferred to the execution node. Internal cluster routing can impact network performance. For more information, see Internal cluster routing . Container pull settings can impact job start time. The execution environment is a container that is used to run jobs within it. Container pull settings can be set to "Always", "Never" or "If not present". If the container is always pulled, this can cause delays. Ensure that all cluster nodes, including execution, control, and the database, have been deployed in instances with storage rated to the minimum required IOPS, because the manner in which automation controller runs ansible and caches event data implicates significant disk I/O. For more information, see Red Hat Ansible Automation Platform system requirements . Database storage does not stop growing. Automation controller has a management job titled "Cleanup Job Details". By default, it is set to keep 120 days of data and to run once a week. To reduce the amount of data in the database, you can shorten the retention time. For more information, see Removing Old Activity Stream Data . Running the cleanup job deletes the data in the database. However, the database must at some point perform its vacuuming operation which reclaims storage. See PostgreSQL database configuration and maintenance for automation controller for more information about database vacuuming. 15.4. Metrics to monitor automation controller Monitor your automation controller hosts at the system and application levels. System level monitoring includes the following information: Disk I/O RAM use CPU use Network traffic Application level metrics provide data that the application knows about the system. This data includes the following information: How many jobs are running in a given instance Capacity information about instances in the cluster How many inventories are present How many hosts are in those inventories Using system and application metrics can help you identify what was happening in the application when a service degradation occurred. Information about automation controller's performance over time helps when diagnosing problems or doing capacity planning for future growth. 15.4.1. Metrics for monitoring automation controller application For application level monitoring, automation controller provides Prometheus-style metrics on an API endpoint /api/v2/metrics . Use these metrics to monitor aggregate data about job status and subsystem performance, such as for job output processing or job scheduling. The metrics endpoint includes descriptions of each metric. Metrics of particular interest for performance include: awx_status_total Current total of jobs in each status. Helps correlate other events to activity in system. Can monitor upticks in errored or failed jobs. awx_instance_remaining_capacity Amount of capacity remaining for running additional jobs. callback_receiver_event_processing_avg_seconds colloquially called "job events lag". Running average of the lag time between when a task occurred in ansible and when the user is able to see it. This indicates how far behind the callback receiver is in processing events. When this number is very high, users can consider scaling up the control plane or using the capacity adjustment feature to reduce the number of jobs a control node controls. callback_receiver_events_insert_db Counter of events that have been inserted by a node. 
Can be used to calculate the job event insertion rate over a given time period. callback_receiver_events_queue_size_redis Indicator of how far behind callback receiver is in processing events. If too high, Redis can cause the control node to run out of memory (OOM). 15.4.2. System level monitoring Monitoring the CPU and memory use of your cluster hosts is important because capacity management for instances does not introspect into the actual resource usage of hosts. The resource impact of automation jobs depends on what the playbooks are doing. For example, many cloud or networking modules do most of the processing on the execution node, which runs the Ansible Playbook. The impact on the automation controller is very different than if you were running a native module like "yum" where the work is performed on the target hosts where the execution node spends much of the time during this task waiting on results. If CPU or memory usage is very high, consider lowering the capacity adjustment (available on the instance detail page) on affected instances in the automation controller. This limits how many jobs are run on or controlled by this instance. Monitor the disk I/O and use of your system. The manner in which an automation controller node runs Ansible and caches output on the file system, and eventually saves it in the database, creates high levels of disk reads and writes. Identifying poor disk performance early can help prevent poor user experience and system degradation. Additional resources For more information about configuring monitoring, see Metrics . Additional insights into automation usage are available when you enable data collection for automation analytics. For more information, see Automation analytics and Red Hat Insights for Red Hat Ansible Automation Platform . 15.5. PostgreSQL database configuration and maintenance for automation controller To improve the performance of automation controller, you can configure the following configuration parameters in the database: Maintenance The VACUUM and ANALYZE tasks are important maintenance activities that can impact performance. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables. ANALYZE collects statistics about the contents of tables in the database, and stores the results in the pg_statistic system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries. The autovacuuming PostgreSQL configuration parameter automates the execution of VACUUM and ANALYZE commands. Setting autovacuuming to true is a good practice. However, autovacuuming will not occur if there is never any idle time on the database. If it is observed that autovacuuming is not sufficiently cleaning up space on the database disk, then scheduling specific vacuum tasks during specific maintenance windows can be a solution. Configuration parameters To improve the performance of the PostgreSQL server, configure the following Grand Unified Configuration (GUC) parameters that manage database memory. You can find these parameters inside the $PGDATA directory in the postgresql.conf file, which manages the configurations of the database server. shared_buffers : determines how much memory is dedicated to the server for caching data. The default value for this parameter is 128 MB.
When you modify this value, you must set it between 15% and 25% of the machine's total RAM. Note You must restart the database server after changing the value for shared_buffers . Note If you are compiling Postgres against OpenSSL 3.2, your system regresses to remove the parameter for User during startup. You can rectify this by using the BIO_get_app_data call instead of open_get_data . Only an administrator can make these changes, but it impacts all users connected to the PostgreSQL database. If you update your systems without the OpenSSL patch, you are not impacted, and you do not need to take action. work_mem : provides the amount of memory to be used by internal sort operations and hash tables before disk-swapping. Sort operations are used for order by, distinct, and merge join operations. Hash tables are used in hash joins and hash-based aggregation. The default value for this parameter is 4 MB. Setting the correct value of the work_mem parameter improves the speed of a search by reducing disk-swapping. Use the following formula to calculate the optimal value of the work_mem parameter for the database server: Total RAM * 0.25 / max_connections Note Setting a large work_mem can cause the PostgreSQL server to go out of memory (OOM) if there are too many open connections to the database. max_connections : specifies the maximum number of concurrent connections to the database server. maintenance_work_mem : provides the maximum amount of memory to be used by maintenance operations, such as vacuum, create index, and alter table add foreign key operations. The default value for this parameter is 64 MB. Use the following equation to calculate a value for this parameter: Total RAM * 0.05 Note Set maintenance_work_mem higher than work_mem to improve performance for vacuuming. Additional resources For more information on autovacuuming settings, see Automatic Vacuuming . 15.6. Automation controller tuning You can configure many automation controller settings by using the automation controller UI, API, and file based settings including: Live events in the automation controller UI Job event processing Control and execution node capacity Instance group and container group capacity Task management (job scheduling) Internal cluster routing Web server tuning 15.6.1. Managing live events in the automation controller UI Events are sent to any node where there is a UI client subscribed to a job. This task is expensive, and becomes more expensive as the number of events that the cluster is producing increases and the number of control nodes increases, because all events are broadcast to all nodes regardless of how many clients are subscribed to particular jobs. To reduce the overhead of displaying live events in the UI, administrators can choose to either: Disable live streaming events. Reduce the number of events shown per second or before truncating or hiding events in the UI. When you disable live streaming of events, they are only loaded on hard refresh to a job's output detail page.
When you reduce the number of events shown per second, this limits the overhead of showing live events, but still provides live updates in the UI without a hard refresh. 15.6.1.1. Disabling live streaming events Procedure Disable live streaming events by using one of the following methods: In the API, set UI_LIVE_UPDATES_ENABLED to False . Navigate to your automation controller. Open the Miscellaneous System Settings window. Set the Enable Activity Stream toggle to Off . 15.6.1.2. Settings to modify rate and size of events If you cannot disable live streaming of events because of their size, reduce the number of events that are displayed in the UI. You can use the following settings to manage how many events are displayed: Settings available for editing in the UI or API : EVENT_STDOUT_MAX_BYTES_DISPLAY : Maximum amount of stdout to display (as measured in bytes). This truncates the size displayed in the UI. MAX_WEBSOCKET_EVENT_RATE : Number of events to send to clients per second. Settings available by using file based settings : MAX_UI_JOB_EVENTS : Number of events to display. This setting hides the rest of the events in the list. MAX_EVENT_RES_DATA : The maximum size of the ansible callback event's "res" data structure. The "res" is the full "result" of the module. When the maximum size of ansible callback events is reached, then the remaining output will be truncated. Default value is 700000 bytes. LOCAL_STDOUT_EXPIRE_TIME : The amount of time before a stdout file is expired and removed locally. Additional resources For more information on file based settings, see Additional settings for automation controller . 15.6.2. Settings for managing job event processing The callback receiver processes all the output of jobs and writes this output as job events to the automation controller database. The callback receiver has a pool of workers that processes events in batches. The number of workers automatically increases with the number of CPU available on an instance. Administrators can override the number of callback receiver workers with the setting JOB_EVENT_WORKERS . Do not set more than 1 worker per CPU, and there must be at least 1 worker. Greater values have more workers available to clear the Redis queue as events stream to the automation controller, but can compete with other processes such as the web server for CPU seconds, uses more database connections (1 per worker), and can reduce the batch size of events each worker commits. Each worker builds up a buffer of events to write in a batch. The default amount of time to wait before writing a batch is 1 second. This is controlled by the JOB_EVENT_BUFFER_SECONDS setting. Increasing the amount of time the worker waits between batches can result in larger batch sizes. 15.6.3. Capacity settings for control and execution nodes The following settings impact capacity calculations on the cluster. Set them to the same value on all control nodes by using the following file-based settings. AWX_CONTROL_NODE_TASK_IMPACT : Sets the impact of controlling jobs. You can use it when your control plane exceeds desired CPU or memory usage to control the number of jobs that your control plane can run at the same time. SYSTEM_TASK_FORKS_CPU and SYSTEM_TASK_FORKS_MEM : Influence how many resources are estimated to be consumed by each fork of Ansible. By default, 1 fork of Ansible is estimated to use 0.25 of a CPU and 100 Mb of memory. Additional resources For information about file-based settings, see Additional settings for automation controller . 15.6.4. 
Capacity settings for instance group and container group Use the max_concurrent_jobs and max_forks settings available on instance groups to limit how many jobs and forks can be consumed across an instance group or container group. To calculate the max_concurrent_jobs you need on a container group consider the pod_spec setting for that container group. In the pod_spec , you can see the resource requests and limits for the automation job pod. Use the following equation to calculate the maximum concurrent jobs that you need: ((number of worker nodes in kubernetes cluster) * (CPU available on each worker)) / (CPU request on pod_spec) = maximum number of concurrent jobs For example, if your pod_spec indicates that a pod will request 250 mcpu Kubernetes cluster has 1 worker node with 2 CPU, the maximum number of jobs that you need to start with is 8. You can also consider the memory consumption of the forks in the jobs. Calculate the appropriate setting of max_forks with the following equation: ((number of worker nodes in kubernetes cluster) * (memory available on each worker)) / (memory request on pod_spec) = maximum number of forks For example, given a single worker node with 8 Gb of Memory, we determine that the max forks we want to run is 81. This way, either 39 jobs with 1 fork can run (task impact is always forks + 1), or 2 jobs with forks set to 39 can run. You might have other business requirements that motivate using max_forks or max_concurrent_jobs to limit the number of jobs launched in a container group. 15.6.5. Settings for scheduling jobs The task manager periodically collects tasks that need to be scheduled and determines what instances have capacity and are eligible for running them. The task manager has the following workflow: Find and assign the control and execution instances. Update the job's status to waiting. Message the control node through pg_notify for the dispatcher to pick up the task and start running it. If the scheduling task is not completed within TASK_MANAGER_TIMEOUT seconds (default 300 seconds), the task is terminated early. Timeout issues generally arise when there are thousands of pending jobs. One way the task manager limits how much work it can do in a single run is the START_TASK_LIMIT setting. This limits how many jobs it can start in a single run. The default is 100 jobs. If more jobs are pending, a new scheduler task is scheduled to run immediately after. Users who are willing to have potentially longer latency between when a job is launched and when it starts, to have greater overall throughput, can consider increasing the START_TASK_LIMIT . To see how long individual runs of the task manager take, use the Prometheus metric task_manager__schedule_seconds , available in /api/v2/metrics . Jobs elected to begin running by the task manager do not do so until the task manager process exits and commits its changes. The TASK_MANAGER_TIMEOUT setting determines how long a single run of the task manager will run for before committing its changes. When the task manager reaches its timeout, it attempts to commit any progress it made. The task is not actually forced to exit until after a grace period (determined by TASK_MANAGER_TIMEOUT_GRACE_PERIOD ) has passed. 15.6.6. Internal Cluster Routing Automation controller cluster hosts communicate across the network within the cluster. 
In the inventory file for the traditional VM installer, you can indicate multiple routes to the cluster nodes that are used in different ways: Example : [automationcontroller] controller1 ansible_user=ec2-user ansible_host=10.10.12.11 node_type=hybrid routable_hostname=somehost.somecompany.org controller1 is the inventory hostname for the automation controller host. The inventory hostname is what is shown as the instance hostname in the application. This can be useful when preparing for disaster recovery scenarios where you want to use the backup/restore method to restore the cluster to a new set of hosts that have different IP addresses. In this case you can have entries in /etc/hosts that map these inventory hostnames to IP addresses, and you can use internal IP addresses to mitigate any DNS issues when it comes to resolving public DNS names. ansible_host=10.10.12.11 indicates how the installer reaches the host, which in this case is an internal IP address. This is not used outside of the installer. routable_hostname=somehost.somecompany.org indicates the hostname that is resolvable for the peers that connect to this node on the receptor mesh. Since it may cross multiple networks, we are using a hostname that will map to an IP address resolvable for the receptor peers. 15.6.7. Web server tuning Control and Hybrid nodes each serve the UI and API of automation controller. WSGI traffic is served by the uwsgi web server on a local socket. ASGI traffic is served by Daphne. NGINX listens on port 443 and proxies traffic as needed. To scale automation controller's web service, follow these best practices: Deploy multiple control nodes and use a load balancer to spread web requests over multiple servers. Set max connections per automation controller to 100. To optimize automation controller's web service on the client side, follow these guidelines: Direct user to use dynamic inventory sources instead of individually creating inventory hosts by using the API. Use webhook notifications instead of polling for job status. Use the bulk APIs for host creation and job launching to batch requests. Use token authentication. For automation clients that must make many requests very quickly, using tokens is a best practice, because depending on the type of user, there may be additional overhead when using basic authentication. Additional resources For more information on workloads with high levels of API interaction, see Scaling Automation Controller for API Driven Workloads . For more information on bulk API, see Bulk API in Automation Controller . For more information on how to generate and use tokens, see Token-Based Authentication .
[ "Total RAM * 0.25 / max_connections", "Total RAM * 0.05", "((number of worker nodes in kubernetes cluster) * (CPU available on each worker)) / (CPU request on pod_spec) = maximum number of concurrent jobs", "((number of worker nodes in kubernetes cluster) * (memory available on each worker)) / (memory request on pod_spec) = maximum number of forks", "[automationcontroller] controller1 ansible_user=ec2-user ansible_host=10.10.12.11 node_type=hybrid routable_hostname=somehost.somecompany.org" ]
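The following Bash sketch simply evaluates the capacity-planning and database-tuning equations quoted above for the example workload (10 concurrent jobs, 5 forks, 300 managed hosts, 1,000 tasks per hour per host, 16 GB RAM nodes). It is an illustration of the arithmetic, not a supported tool; the worker-node size, pod_spec CPU request, and max_connections values are assumptions.

#!/usr/bin/env bash
# Example workload from the capacity planning exercise
jobs=10             # maximum concurrent jobs
forks=5             # default forks per job template
hosts=300           # managed hosts
tasks_per_hour=1000 # tasks per hour per host

# Execution capacity: (jobs * forks) + (jobs * 1 base task impact per job)
echo "Execution capacity needed: $(( jobs * forks + jobs ))"          # 60

# Control capacity: one capacity unit per concurrent job
echo "Control capacity needed: ${jobs}"                               # 10

# Minimum task volume; each task emits at least 6 job events at verbosity 1
echo "Tasks per hour: $(( tasks_per_hour * hosts ))"                  # 300000

# Container group sizing, assuming 1 worker with 2 CPU (2000m) and a pod_spec
# CPU request of 250m
workers=1; cpu_per_worker_m=2000; cpu_request_m=250
echo "max_concurrent_jobs: $(( workers * cpu_per_worker_m / cpu_request_m ))"  # 8

# PostgreSQL memory settings for a 16 GB database node, assuming max_connections=1024
total_ram_mb=16384; max_connections=1024
echo "shared_buffers (MB): $(( total_ram_mb / 4 ))"                   # 25% of total RAM
echo "work_mem (MB): $(( total_ram_mb / 4 / max_connections ))"       # RAM * 0.25 / max_connections
echo "maintenance_work_mem (MB): $(( total_ram_mb * 5 / 100 ))"       # RAM * 0.05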
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-controller-improving-performance
Chapter 4. Deploy standalone Multicloud Object Gateway
Chapter 4. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking regular backups of the NooBaa DB PVC. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 4.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 4.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual .
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 4.3. Creating standalone Multicloud Object Gateway on IBM Power You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . 
Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node)
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem" ]
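If you prefer to check the verification steps from the command line rather than the web console, a sketch such as the following lists the same pods and storage objects; the exact pod names and counts depend on your deployment.

# List the operator and Multicloud Object Gateway pods described in the verification table
oc get pods -n openshift-storage

# Narrow the output to the NooBaa and operator components
oc get pods -n openshift-storage | grep -E 'noobaa|ocs-operator|odf-|rook-ceph-operator|csi-addons'

# Confirm that persistent volumes were provisioned from the localblock storage class
oc get pv | grep localblock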
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_power/deploy-standalone-multicloud-object-gateway-ibm-power
2.4. Configuring Fencing
2.4. Configuring Fencing You must configure a fencing device for each node in the cluster. When configuring a fencing device, you should ensure that your fencing device does not share power with the node that it controls. For information on fence device configuration, see Section 4.6, "Configuring Fence Devices" . For information on configuring fencing for cluster nodes, see Section 4.7, "Configuring Fencing for Cluster Members" . After configuring a fence device for a node, it is important to test the fence device, to ensure that the cluster will cut off access to a resource when the cluster loses communication with that node. How you break communication with the node will depend on your system setup and the type of fencing you have configured. You may need to physically disconnect network cables, or force a kernel panic on the node. You can then check whether the node has been fenced as expected. When creating a two-node cluster, you may need to configure a tie-breaking mechanism for the cluster to avoid split brains and fence races for the cluster, which can occur when the cluster interconnect experiences issues that prevent the nodes from communicating. For information on avoiding fence races, see the Red Hat Knowledgebase solution "What are my options for avoiding fence races in RHEL 5, 6, and 7 High Availability clusters with an even number of nodes?" on Red Hat Customer Portal at https://access.redhat.com/solutions/91653 . For information on avoiding fencing loops, see the Red Hat Knowledgebase solution "How can I avoid fencing loops with 2 node clusters and Red Hat High Availability clusters?" on Red Hat Customer Portal at https://access.redhat.com/solutions/272913 .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-fenceconfig-ca
RHACS Cloud Service
RHACS Cloud Service Red Hat Advanced Cluster Security for Kubernetes 4.7 About the RHACS Cloud Service Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/rhacs_cloud_service/index
24.4.5. Using the du Command
24.4.5. Using the du Command The du command allows you to display the amount of space that is being used by files in a directory. To display the disk usage for each of the subdirectories in the current working directory, run the command with no additional command-line options: du For example: By default, the du command displays the disk usage in kilobytes. To view the information in megabytes and gigabytes, supply the -h command-line option, which causes the utility to display the values in a human-readable format: du -h For instance: At the end of the list, the du command always shows the grand total for the current directory. To display only this information, supply the -s command-line option: du -sh For example: For a complete list of available command-line options, see the du (1) manual page.
[ "~]$ du 14972 ./Downloads 4 ./.gnome2 4 ./.mozilla/extensions 4 ./.mozilla/plugins 12 ./.mozilla 15004 .", "~]$ du -h 15M ./Downloads 4.0K ./.gnome2 4.0K ./.mozilla/extensions 4.0K ./.mozilla/plugins 12K ./.mozilla 15M .", "~]$ du -sh 15M ." ]
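As an additional sketch, not part of the original section, du can be combined with other options and utilities; the following examples assume GNU coreutils as shipped with Red Hat Enterprise Linux 6 or later.

~]$ du -h --max-depth=1          # summarize only one directory level deep
~]$ du -ah | sort -h | tail -n 5 # list the five largest files and directories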
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-sysinfo-filesystems-du
Chapter 63. Storage
Chapter 63. Storage No support for thin provisioning on top of RAID in a cluster While RAID logical volumes and thinly provisioned logical volumes can be used in a cluster when activated exclusively, there is currently no support for thin provisioning on top of RAID in a cluster. This is the case even if the combination is activated exclusively. Currently this combination is only supported in LVM's single machine non-clustered mode. (BZ# 1014758 ) Anaconda installation can fail when an LVM or md device has metadata from a previous install During Red Hat Enterprise Linux 7 installation on a machine where a disk to be multipathed already starts with LVM or md metadata on it from a previous install, multipath will not get set up on the device, and LVM/md will get set up on one of the path devices while Anaconda is starting up. This can create problems with Anaconda, and cause the installation to fail. The workaround for this issue is to add mpath.wwid=<WWID> to the kernel command line when booting up for the installation. <WWID> is the wwid of the device that multipath should claim. This value is also the same as the ID_SERIAL udev database value for scsi devices and ID_UID for DASD devices.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_storage
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.2/providing-direct-documentation-feedback_openjdk
Chapter 1. Federal Information Processing Standard (FIPS) readiness and compliance
Chapter 1. Federal Information Processing Standard (FIPS) readiness and compliance The Federal Information Processing Standard (FIPS), developed by the National Institute of Standards and Technology (NIST), is widely regarded as the standard for securing and encrypting sensitive data, notably in highly regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode , in which the system allows the use of only specific FIPS-validated cryptographic modules, such as openssl . This ensures FIPS compliance. 1.1. Enabling FIPS compliance Use the following procedure to enable FIPS compliance on your Red Hat Quay deployment. Prerequisites If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled. If you are deploying Red Hat Quay on OpenShift Container Platform, OpenShift Container Platform is version 4.10 or later. Your Red Hat Quay version is 3.5.0 or later. If you are using Red Hat Quay on OpenShift Container Platform on an IBM Power or IBM Z cluster: OpenShift Container Platform version 4.14 or later is required Red Hat Quay version 3.10 or later is required You have administrative privileges for your Red Hat Quay deployment. Procedure In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to true . For example: --- FEATURE_FIPS: true --- With FEATURE_FIPS set to true , Red Hat Quay runs using FIPS-compliant hash functions.
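As a minimal sketch (assuming a RHEL 8 or later host and that the config bundle sits in the current working directory), you might first confirm that the host itself is in FIPS mode and then add the field in standard YAML key: value form:

fips-mode-setup --check

cat >> config.yaml <<'EOF'
FEATURE_FIPS: true
EOF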
[ "--- FEATURE_FIPS = true ---" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/fips-overview
Chapter 3. Deployment of the Ceph File System
Chapter 3. Deployment of the Ceph File System As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet the storage needs. Basically, the deployment workflow is three steps: Create Ceph File Systems on a Ceph Monitor node. Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted. Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). 3.1. Layout, quota, snapshot, and network restrictions These user capabilities can help you restrict access to a Ceph File System (CephFS) based on the needed requirements. Important All user capability flags, except rw , must be specified in alphabetical order. Layouts and Quotas When using layouts or quotas, clients require the p flag, in addition to rw capabilities. Setting the p flag restricts all the attributes being set by special extended attributes, those with a ceph. prefix. Also, this restricts other means of setting these fields, such as openc operations with layouts. Example In this example, client.0 can modify layouts and quotas on the file system cephfs_a , but client.1 cannot. Snapshots When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it. Example In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a . Network Restricting clients connecting from a particular network. Example The optional network and prefix length is in CIDR notation, for example, 10.3.0.0/16 . Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities. 3.2. Creating Ceph File Systems You can create multiple Ceph File Systems (CephFS) on a Ceph Monitor node. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). Root-level access to a Ceph Monitor node. Root-level access to a Ceph client node. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-fuse package: Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Set the appropriate permissions for the configuration file: Create a Ceph File System: Syntax Example Repeat this step to create additional file systems. Note By running this command, Ceph automatically creates the new pools, and deploys a new Ceph Metadata Server (MDS) daemon to support the new file system. This also configures the MDS affinity accordingly. Verify access to the new Ceph File System from a Ceph client. Authorize a Ceph client to access the new file system: Syntax Example Note Optionally, you can add a safety measure by specifying the root_squash option. 
This prevents accidental deletion scenarios by disallowing clients with a uid=0 or gid=0 to do write operations, but still allows read operations. Example In this example, root_squash is enabled for the file system cephfs01 , except within the /volumes directory tree. Important The Ceph client can only see the CephFS it is authorized for. Copy the Ceph user's keyring to the Ceph client node: Syntax Example On the Ceph client node, create a new directory: Syntax Example On the Ceph client node, mount the new Ceph File System: Syntax Example On the Ceph client node, list the directory contents of the new mount point, or create a file on the new mount point. Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Mounting the Ceph File System as a kernel client section in the Red Hat Ceph Storage File System Guide for more details. See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage File System Guide for more details. See Ceph File System limitations and the POSIX standards section in the Red Hat Ceph Storage File System Guide for more details. See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details. 3.3. Adding an erasure-coded pool to a Ceph File System By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool to the Ceph File System, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools. Important CephFS EC pools are for archival purpose only. Important For production environments, Red Hat recommends using the default replicated data pool for CephFS. The creation of inodes in CephFS creates at least one object in the default data pool. It is better to use a replicated pool for the default data to improve small-object write performance, and to improve read performance for updating backtraces. Prerequisites A running Red Hat Ceph Storage cluster. An existing Ceph File System. Pools using BlueStore OSDs. Root-level access to a Ceph Monitor node. Installation of the attr package. Procedure Create an erasure-coded data pool for CephFS: Syntax Example Verify the pool was added: Example Enable overwrites on the erasure-coded pool: Syntax Example Verify the status of the Ceph File System: Syntax Example Add the erasure-coded data pool to the existing CephFS: Syntax Example This example adds the new data pool, cephfs-data-ec01 , to the existing erasure-coded file system, cephfs-ec . Verify that the erasure-coded pool was added to the Ceph File System: Syntax Example Set the file layout on a new directory: Syntax Example In this example, all new files created in the /mnt/cephfs/newdir directory inherit the directory layout and places the data in the newly added erasure-coded pool. Additional Resources See The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information about CephFS MDS. See the Creating Ceph File Systems section in the Red Hat Ceph Storage File System Guide for more information. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information. 
See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information. 3.4. Creating client users for a Ceph File System Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted. Prerequisites A running Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon (ceph-mds). Root-level access to a Ceph Monitor node. Root-level access to a Ceph client node. Procedure Log into the Cephadm shell on the monitor node: Example On a Ceph Monitor node, create a client user: Syntax To restrict the client to only writing in the temp directory of filesystem cephfs_a : Example To completely restrict the client to the temp directory, remove the root ( / ) directory: Example Note Supplying all or asterisk as the file system name grants access to every file system. Typically, it is necessary to quote the asterisk to protect it from the shell. Verify the created key: Syntax Example Copy the keyring to the client. On the Ceph Monitor node, export the keyring to a file: Syntax Example Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client node name or IP. Example From the client node, set the appropriate permissions for the keyring file: Syntax Example Additional Resources See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details. 3.5. Mounting the Ceph File System as a kernel client You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot. Important Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution. Prerequisites Root-level access to a Linux-based client node. Root-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage 8 Tools repository: Red Hat Enterprise Linux 9 Install the ceph-common package: Log into the Cephadm shell on the monitor node: Example Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example From the client node, set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting Create a mount directory on the client node: Syntax Example Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name: Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. 
Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Syntax Example Note You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list. Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Note You can set the nowsync option to asynchronously execute file creation and removal on the Red Hat Ceph Storage clusters. This improves the performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. The nowsync option requires kernel clients with Red Hat Enterprise Linux 9.0 or later. Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client host, create a new directory for mounting the Ceph File System. Syntax Example Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, ceph , for CephFS. The fourth column sets the various options, such as, the user name and the secret file using the name and secretfile options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Additional Resources See the mount(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details. 3.6. Mounting the Ceph File System as a FUSE client You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot. Prerequisites Root-level access to a Linux-based client node. Root-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage 8 Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-fuse package: Log into the Cephadm shell on the monitor node: Example Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. 
Example From the client node, set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path . Use the ceph-fuse utility to mount the Ceph File System. Syntax Example Note If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client. CLIENT_ID .keyring , then use the --keyring option to specify the path to the user keyring, for example: Example Note Use the -r option to instruct the client to treat that path as its root: Syntax Example Note If you want to automatically reconnect an evicted Ceph client, then add the --client_reconnect_stale=true option. Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path . Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, fuse.ceph , for CephFS. The fourth column sets the various options, such as the user name and the keyring using the ceph.name and ceph.keyring options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. To specify which Ceph File System to access, use the ceph.client_fs option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. If you want to automatically reconnect after an eviction, then set the client_reconnect_stale=true option. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Additional Resources The ceph-fuse(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details. Additional Resources See Section 2.5, "Management of MDS service using the Ceph Orchestrator" to install Ceph Metadata servers. See Section 3.2, "Creating Ceph File Systems" for details. See Section 3.4, "Creating client users for a Ceph File System" for details. See Section 3.5, "Mounting the Ceph File System as a kernel client" for details. See Section 3.6, "Mounting the Ceph File System as a FUSE client" for details. See Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS Metadata Server daemon.
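Pulling the preceding sections together, a minimal end-to-end sketch (reusing the illustrative file system, client, monitor, and host names from the examples above) is to create the file system, authorize a client, copy its keyring, and mount with the kernel client:

ceph fs volume create cephfs01
ceph fs authorize cephfs01 client.1 / rw
ceph auth get client.1 -o /etc/ceph/ceph.client.1.keyring
scp /etc/ceph/ceph.client.1.keyring root@client01:/etc/ceph/

# on the client node
chmod 644 /etc/ceph/ceph.client.1.keyring
mkdir -p /mnt/cephfs
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01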
[ "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/", "scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "ceph fs volume create FILE_SYSTEM_NAME", "ceph fs volume create cephfs01", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS", "ceph fs authorize cephfs01 client.1 / rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 exported keyring for client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME scp OUTPUT_FILE_NAME TARGET_NODE_NAME :/etc/ceph", "ceph auth get client.1 > ceph.client.1.keyring exported keyring for client.1 scp ceph.client.1.keyring client:/etc/ceph root@client's password: ceph.client.1.keyring 100% 178 333.0KB/s 00:00", "mkdir PATH_TO_NEW_DIRECTORY_NAME", "mkdir /mnt/mycephfs", "ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=_FILE_SYSTEM_NAME", "ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01 ceph-fuse[555001]: starting ceph client 2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15 ceph-fuse[555001]: starting fuse", "ceph osd pool create DATA_POOL_NAME erasure", "ceph osd pool create cephfs-data-ec01 erasure pool 'cephfs-data-ec01' created", "ceph osd lspools", "ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true", "ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true set pool 15 allow_ec_overwrites to true", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME", "ceph fs add_data_pool cephfs-ec cephfs-data-ec01", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec 
cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T cephfs-data-ec01 data 0 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "mkdir PATH_TO_DIRECTORY setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY", "mkdir /mnt/cephfs/newdir setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir", "cephadm shell", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ] PERMISSIONS", "ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==", "ceph fs authorize cephfs_a client.1 /temp rw", "ceph auth get client. ID", "ceph auth get client.1 client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps mds = \"allow r, allow rw path=/temp\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph auth get client. ID -o ceph.client. ID .keyring", "ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "chmod 644 ceph.client. ID .keyring", "chmod 644 /etc/ceph/ceph.client.1.keyring", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID ,fs= FILE_SYSTEM_NAME", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "#DEVICE PATH TYPE OPTIONS MON_0_HOST : PORT , MOUNT_POINT ceph name= CLIENT_ID , MON_1_HOST : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , fs= FILE_SYSTEM_NAME , MON_2_HOST : PORT :/q[_VOL_]/ SUB_VOL / UID_SUB_VOL , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs ceph name=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ fs=cephfs01, _netdev,noatime", "subscription-manager repos --enable=6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "ceph-fuse -n client. 
CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT", "ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs", "ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs", "ceph-fuse -n client. CLIENT_ID MOUNT_POINT -r PATH", "ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs", "ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME : PORT , MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME : PORT :/ ceph.client_fs= FILE_SYSTEM_NAME ,ceph.name= USERNAME ,ceph.keyring=/etc/ceph/ KEYRING_FILE , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/mycephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring, _netdev,defaults" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/deployment-of-the-ceph-file-system
Chapter 2. Red Hat OpenShift support for Windows Containers release notes
Chapter 2. Red Hat OpenShift support for Windows Containers release notes 2.1. About Red Hat OpenShift support for Windows Containers Red Hat OpenShift support for Windows Containers enables running Windows compute nodes in an OpenShift Container Platform cluster. Running Windows workloads is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With Windows nodes available, you can run Windows container workloads in OpenShift Container Platform. These release notes track the development of the WMCO, which provides all Windows container workload capabilities in OpenShift Container Platform. Version 5.x of the WMCO is compatible only with OpenShift Container Platform 4.10. Important Because Microsoft has stopped publishing Windows Server 2019 images with Docker , Red Hat no longer supports Microsoft Azure for WMCO releases earlier than version 6.0.0. For WMCO 5.y.z and earlier, Windows Server 2019 images must have Docker pre-installed. WMCO 6.0.0 and later uses containerd as the runtime. You can upgrade to OpenShift Container Platform 4.11, which uses WMCO 6.0.0. 2.2. Getting support Red Hat OpenShift support for Windows Containers is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements . You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal . For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers . If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support. 2.3. Release notes for Red Hat Windows Machine Config Operator 5.1.1 This release of the WMCO is now available with a bug fix and a few improvements. The components of the WMCO 5.1.1 are now available in RHBA-2023:4487 . 2.3.1. Bug fix Previously, an endpoint object missing required information caused the WMCO pod to fail during startup. With this fix, the WMCO verifies that the endpoint object is present with the required fields. As a result, the WMCO is able to start and reconcile an invalid or misconfigured endpoint object. ( OCPBUGS-5131 ) 2.3.2. Removed features 2.3.2.1. Support for Microsoft Azure has been removed Support for Microsoft Azure has been removed. Microsoft is removing images from the Azure registry that have Docker preinstalled, which is a prerequisite for using WMCO 5.x on Microsoft Azure. 2.4. Release notes for Red Hat Windows Machine Config Operator 5.1.0 This release of the WMCO is now available with a bug fix and a few improvements. The components of the WMCO 5.1.0 are now available in RHBA-2022:4989-01 . 2.4.1. Bug fix Previously, the reverse DNS lookup of Windows Bring-Your-Own-Host (BYOH) instances failed when the node's external IP was present without pointer records (PTR). With this release, if the PTR record is not present in the first node IP address, the WMCO looks in the other node addresses until a reverse lookup record is found.
As a result, the reverse configuration of Windows BYOH instances succeeds when the node external IP address is present without a PTR record. ( BZ#2081825 ) 2.4.2. Known Issue Windows machine sets cannot scale up when the publicIP parameter is set to false in machineSets on Microsoft Azure. This issue is tracked by ( BZ#2091642 ). 2.4.3. New features and improvements 2.4.3.1. Windows node certificates are updated With this release, the WMCO updates the Windows node certificates when the kubelet client certificate authority (CA) certificate rotates. 2.4.3.2. Windows Server 2022 support With this release, Windows Server 2022 is now supported on VMware vSphere and bare metal. 2.5. Release notes for Red Hat Windows Machine Config Operator 5.0.0 This release of the WMCO provides bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 5.0.0 were released in RHSA-2022:0577 . Previously, Windows containers on Windows nodes could get assigned an incorrect DNS server IP. This caused DNS resolution to fail. This fix removes the hard-coded cluster DNS information, and the DNS server IP is now passed as a command-line argument. As a result, Windows containers on Windows nodes get assigned a valid DNS server IP and DNS resolution works for Windows workloads. ( BZ#1994859 ) Previously, certain commands being run by the WMCO against Windows VMs that used PowerShell as the default SSH shell were not parsed correctly. As a result, these VMs could not be added to a cluster as a node. With this fix, the WMCO identifies the default SSH shell of a VM and runs commands accordingly. As a result, VMs with PowerShell as the default SSH shell can now be added to the cluster as a node. ( BZ#2000772 ) Previously, if a Bring-Your-Own-Host (BYOH) VM was specified with a DNS object, the WMCO was not properly associating the VM with its node object. This caused the WMCO to attempt to configure VMs that were already fully configured. With this fix, the WMCO correctly resolves the DNS address of the VMs when looking for an associated node. As a result, BYOH VMs are now only configured when needed. ( BZ#2005360 ) Previously, if the windows-exporter metrics endpoint object contained a reference to a deleted machine, the WMCO ignored Deleting phase notification events for those machines. This fix removes the validation of the machine object from event filtering. As a result, the windows-exporter metrics endpoint object is correctly updated even when the machine is still deleting. ( BZ#2008601 ) Previously, if an entity other than the WMCO modified the certificate signing request (CSR) associated with a BYOH node, the WMCO would have a stale reference to the CSR and would be unable to approve it. With this fix, if an update conflict is detected, the WMCO retries the CSR approval until a specified timeout. As a result, the CSR processing completes as expected. ( BZ#2032048 ) 2.6. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. Important Because Microsoft has stopped publishing Windows Server 2019 images with Docker , Red Hat no longer supports Microsoft Azure for WMCO releases earlier than version 6.0.0. For WMCO 5.y.z and earlier, Windows Server 2019 images must have Docker pre-installed.
WMCO 6.0.0 and later uses containerd as the runtime. You can upgrade to OpenShift Container Platform 4.11, which uses WMCO 6.0.0. 2.6.1. WMCO 5.1.x supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 5.1.1 and 5.1.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2019 (version 1809) Microsoft Azure Windows Server 2019 (version 1809) VMware vSphere Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2022 (OS Build 20348.681 or later). Bare metal or provider agnostic Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later. Windows Server 2019 (version 1809) 2.6.2. WMCO 5.0.0 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 5.0.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only the appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2019 (version 1809) VMware vSphere Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later. Bare metal or provider agnostic Windows Server 2019 (version 1809) 2.6.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Be aware that OpenShift SDN networking is the default network for OpenShift Container Platform clusters. However, OpenShift SDN is not supported by WMCO. Table 2.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port bare metal Hybrid networking with OVN-Kubernetes Table 2.2. WMCO 5.1.0 Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2019 (version 1809) Custom VXLAN port Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later Table 2.3. WMCO 5.0.0 Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2019 (version 1809) Custom VXLAN port Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later 2.7. 
Known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat cost management Red Hat OpenShift Local Windows nodes do not support pulling container images from private registries. You can use images from public registries or pre-pull the images. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Pod termination grace periods require the containerd container runtime to be installed on the Windows node. Kubernetes has identified several API compatibility issues .
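Because hybrid networking with OVN-Kubernetes is the only supported configuration for Windows nodes, a sketch of the day-1 manifest that enables the hybrid overlay is shown here; the CIDR, host prefix, and custom VXLAN port are placeholder values, and the field names should be checked against the hybrid networking documentation for your release:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898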
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/windows_container_support_for_openshift/windows-containers-release-notes-5-x
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/migrating_applications_to_red_hat_build_of_quarkus_3.15/making-open-source-more-inclusive
Chapter 4. Installing a cluster on vSphere with network customizations
Chapter 4. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.13, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 4.1. 
Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 4.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 4.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 4.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 4.5. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 4.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 4.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 4.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 4.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. 
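For reference, these two static IP addresses correspond to the apiVIPs and ingressVIPs fields that you later set in the install-config.yaml file. The following minimal sketch shows how they might appear; the addresses 10.0.0.1 and 10.0.0.2 are placeholder examples only, not values that you must use.

platform:
  vsphere:
    apiVIPs:        # static IP address for the cluster API
    - 10.0.0.1
    ingressVIPs:    # static IP address for cluster ingress traffic
    - 10.0.0.2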
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.13 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. 
For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.10. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. A cluster that was upgraded from a previous release defaults to using the in-tree vSphere driver, so you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 4.11. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level.
Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 4.11.1. 
Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.7. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 4.11.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. Note On VMware vSphere, dual-stack networking must specify IPv4 as the primary address family. The following additional limitations apply to dual-stack networking: Nodes report only their IPv6 IP address in node.status.addresses Nodes with only a single NIC are supported Pods configured for host networking report only their IPv6 addresses in pod.status.IP If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters.
For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.8. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.9. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. 
For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. 
For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 4.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 4.10. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. 
To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 4.11.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 4.11. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. 
String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 4.11.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 4.12. Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 4.11.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 8 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 9 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: administrator@vsphere.local diskType: thin 10 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 6 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 9 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 10 The vSphere disk provisioning method. 5 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 4.11.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.11.4. 
Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Important You can configure dual-stack networking on a single interface only. Note In a vSphere cluster configured for dual-stack networking, the node custom resource object has only the IP address from the primary network listed in Status.addresses field. In the pod that uses the host networking with dual-stack connectivity, the Status.podIP and Status.podIPs fields contain only the IP address from the primary network. 4.11.5. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. 
USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 4.12. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 4.13. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. 
You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 4.13.1. Specifying multiple subnets for your network Before you install an OpenShift Container Platform cluster on a vSphere host, you can specify multiple subnets for a networking implementation so that the vSphere cloud controller manager (CCM) can select the appropriate subnet for a given networking situation. vSphere can use the subnet for managing pods and services on your cluster. For this configuration, you must specify internal and external Classless Inter-Domain Routing (CIDR) implementations in the vSphere CCM configuration. Each CIDR implementation lists an IP address range that the CCM uses to decide what subnets interact with traffic from internal and external networks. Important Failure to configure internal and external CIDR implementations in the vSphere CCM configuration can cause the vSphere CCM to select the wrong subnet. This situation causes the following error: This configuration can cause new nodes that associate with a MachineSet object with a single subnet to become unusable as each new node receives the node.cloudprovider.kubernetes.io/uninitialized taint. These situations can cause communication issues with the Kubernetes API server that can cause installation of the cluster to fail. Prerequisites You created Kubernetes manifest files for your OpenShift Container Platform cluster. Procedure From the directory where you store your OpenShift Container Platform cluster manifest files, open the manifests/cluster-infrastructure-02-config.yml manifest file. Add a nodeNetworking object to the file and specify internal and external network subnet CIDR implementations for the object. Tip For most networking situations, consider setting the standard multiple-subnet configuration. 
This configuration requires that you set the same IP address ranges in the nodeNetworking.internal.networkSubnetCidr and nodeNetworking.external.networkSubnetCidr parameters. Example of a configured cluster-infrastructure-02-config.yml manifest file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain ... nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> # ... Additional resources Cluster Network Operator configuration .spec.platformSpec.vsphere.nodeNetworking 4.14. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.14.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.13. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. 
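As a sketch of how these fields fit together, the following example shows a CNO configuration object named cluster that uses the default OVN-Kubernetes network plugin with example address ranges. The clusterNetwork and serviceNetwork values are shown only to illustrate the object structure; as noted above, those fields are populated from the install-config.yaml file and are read-only in the manifest.

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:          # pod IP address pools; read-only after manifest creation
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:          # service IP address pool; read-only after manifest creation
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes    # cluster network plugin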
defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.14. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 4.15. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.16. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. 
For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 4.17. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.18. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. 
If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 4.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.15. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.16. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 4.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.18. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 4.18.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 4.18.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.18.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. 
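If you prefer to make this change without opening an editor, the following sketch shows one way to do it with oc patch; it is not part of the official procedure, the resource name cluster is the default, and the empty claim value relies on the automatic image-registry-storage PVC creation that is described in the next step:
# Switch the Image Registry Operator from Removed to Managed, if it bootstrapped as Removed
oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed"}}'
# Request registry storage through a persistent volume claim; the blank claim triggers automatic PVC creation
oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"storage":{"pvc":{"claim":""}}}}'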
Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 4.18.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 4.19. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots.
See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 4.20. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.21. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 4.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 4.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 4.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. 
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 4.21.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible through the load balancer, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the machine config server resource is accessible through the load balancer, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records on your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
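Because DNS propagation can lag, you might query the new records directly before running the verification commands; this is a small sketch that assumes the dig utility is installed on your workstation and uses the console route name from the checks above:
# Both queries should return the front-end IP address of the external load balancer
dig +short api.<cluster_name>.<base_domain>
dig +short console-openshift-console.apps.<cluster_name>.<base_domain>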
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 4.22. Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Note You can scale the remote workers by creating a worker machineset in a separate subnet. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes.
Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: USD sed -i "s;mastersSchedulable: false;mastersSchedulable: true;g" clusterconfigs/manifests/cluster-scheduler-02-config.yml Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 4.23. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "platform: vsphere:", "platform: vsphere: apiVIPs:", "platform: vsphere: diskType:", "platform: vsphere: failureDomains: region:", "platform: vsphere: failureDomains: server:", "platform: vsphere: failureDomains: zone:", "platform: vsphere: failureDomains: topology: datacenter:", "platform: vsphere: failureDomains: topology: datastore:", "platform: vsphere: failureDomains: topology: folder:", "platform: vsphere: failureDomains: topology: networks:", "platform: vsphere: failureDomains: topology: resourcePool:", "platform: vsphere: ingressVIPs:", "platform: vsphere: vcenters:", "platform: vsphere: vcenters: datacenters:", "platform: vsphere: vcenters: password:", "platform: vsphere: vcenters: port:", "platform: vsphere: vcenters: server:", "platform: vsphere: vcenters: user:", "platform: vsphere: apiVIP:", "platform: vsphere: cluster:", "platform: vsphere: datacenter:", "platform: vsphere: defaultDatastore:", "platform: vsphere: folder:", "platform: vsphere: ingressVIP:", "platform: vsphere: network:", "platform: vsphere: password:", "platform: vsphere: resourcePool:", "platform: vsphere: username:", "platform: vsphere: vCenter:", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 8 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 9 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false 
pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112", "platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. 
This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.", "apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 
2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: 
off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "cd ~/clusterconfigs", "cd manifests", "touch cluster-network-avoid-workers-99-config.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"", "sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vsphere/installing-vsphere-installer-provisioned-network-customizations
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/proc_providing-feedback-on-red-hat-documentation_rhel-installer
8.209. samba4
8.209. samba4 8.209.1. RHBA-2014:1605 - samba4 bug fix update Updated samba4 packages that fix one bug are now available for Red Hat Enterprise Linux 6. Samba is an open-source implementation of the Server Message Block (SMB) or Common Internet File System (CIFS) protocol, which allows PC-compatible machines to share files, printers, and other information. Users of samba4 are advised to upgrade to these updated packages, which fix this bug. After installing this update, the smb service will be restarted automatically.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/samba4
Chapter 12. Configuring and Setting Up Remote Jobs
Chapter 12. Configuring and Setting Up Remote Jobs Use this section as a guide to configuring Satellite to execute jobs on remote hosts. Any command that you want to apply to a remote host must be defined as a job template. After you have defined a job template you can execute it multiple times. 12.1. About Running Jobs on Hosts You can run jobs on hosts remotely from Capsules using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution. For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the Capsule base operating system. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. Communication occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. Remote execution uses the SSH service that must be enabled and running on the target host. Ensure that the remote execution Capsule has access to port 22 on the target hosts. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in the Managing Hosts guide. Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates . Note Any Capsule Server base operating system is a client of Satellite Server's internal Capsule, and therefore this section applies to any type of host connected to Satellite Server, including Capsules. You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values. In addition, you can specify custom values for templates when you run the command. For more information, see Executing a Remote Job . 12.2. Remote Execution Workflow When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the remote execution feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the jobs load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job using the Capsule the host is registered to. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts by the Capsule to which they are registered to. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 12.3. Permissions for Remote Execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in the Administering Red Hat Satellite guide. The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 12.4. Creating a Job Template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories . 
Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see the appendices in the Managing Hosts guide. CLI procedure To create a job template using a template-definition file, enter the following command: 12.5. Configuring the Fallback to Any Capsule Remote Execution Setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. For example, to set the value to true , enter the following command: 12.6. Configuring the Global Capsule Remote Execution Setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. 
For example, to set the value to true , enter the following command: 12.7. Configuring Satellite to Use an Alternative Directory to Execute Remote Jobs on Hosts By default, Satellite uses the /var/tmp directory on the client system to execute the remote execution jobs. If the client system has noexec set for the /var/ volume or file system, you must configure Satellite to use an alternative directory because otherwise the remote execution job fails since the script cannot be run. Procedure Create a new directory, for example new_place : Copy the SELinux context from the default var directory: Configure the system: 12.8. Distributing SSH Keys for Remote Execution To use SSH keys for authenticating remote execution connections, you must distribute the public SSH key from Capsule to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 12.9, "Distributing SSH Keys for Remote Execution Manually" . Section 12.10, "Using the Satellite API to Obtain SSH Keys for Remote Execution" . Section 12.11, "Configuring a Kickstart Template to Distribute SSH Keys during Provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance . 12.9. Distributing SSH Keys for Remote Execution Manually To distribute SSH keys manually, complete the following steps: Procedure Enter the following command on Capsule. Repeat for each target host you want to manage: To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 12.10. Using the Satellite API to Obtain SSH Keys for Remote Execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 12.11. Configuring a Kickstart Template to Distribute SSH Keys during Provisioning You can add a remote_execution_ssh_keys snippet to your custom kickstart template to deploy SSH Keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Therefore, Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 12.12. Configuring a keytab for Kerberos Ticket Granting Tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. 
Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 12.13. Configuring Kerberos Authentication for Remote Execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. To confirm that Kerberos authentication is ready to use, run a remote job on the host. 12.14. Setting up Job Templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in the Managing Hosts guide. Ansible Considerations To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in Satellite. For more information, see Synchronizing Repository Templates in the Managing Hosts guide. Parameter Variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. If you do not want your Ansible job template to accept parameter variables at run time, in the Satellite web UI, navigate to Administer > Settings and click the Ansible tab. In the Top level Ansible variables row, change the Value parameter to No . 12.15. Executing a Remote Job You can execute a job that is based on a job template against one or more hosts. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target hosts on which you want to execute a remote job. You can use the search field to filter the host list. From the Select Action list, select Schedule Remote Job . On the Job invocation page, define the main job settings: Select the Job category and the Job template you want to use. 
Optional: Select a stored search string in the Bookmark list to specify the target hosts. Optional: Further limit the targeted hosts by entering a Search query . The Resolves to line displays the number of hosts affected by your query. Use the refresh button to recalculate the number after changing the query. The preview icon lists the targeted hosts. The remaining settings depend on the selected job template. See Creating a Job Template for information on adding custom parameters to a template. Optional: To configure advanced settings for the job, click Display advanced fields . Some of the advanced settings depend on the job template, the following settings are general: Effective user defines the user for executing the job, by default it is the SSH user. Concurrency level defines the maximum number of jobs executed at once, which can prevent overload of systems' resources in a case of executing the job on a large number of hosts. Timeout to kill defines time interval in seconds after which the job should be killed, if it is not finished already. A task which could not be started during the defined interval, for example, if the task took too long to finish, is canceled. Type of query defines when the search query is evaluated. This helps to keep the query up to date for scheduled tasks. Execution ordering determines the order in which the job is executed on hosts: alphabetical or randomized. Concurrency level and Timeout to kill settings enable you to tailor job execution to fit your infrastructure hardware and needs. To run the job immediately, ensure that Schedule is set to Execute now . You can also define a one-time future job, or set up a recurring job. For recurring tasks, you can define start and end dates, number and frequency of runs. You can also use cron syntax to define repetition. For more information about cron, see the Automating System Tasks section of the Red Hat Enterprise Linux 7 System Administrator's Guide . Click Submit . This displays the Job Overview page, and when the job completes, also displays the status of the job. CLI procedure Enter the following command on Satellite: To execute a remote job with custom parameters, complete the following steps: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace query with the filter expression that defines hosts, for example "name ~ rex01" . For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help . 12.16. Scheduling a Recurring Ansible Job for a Host You can schedule a recurring job to run Ansible roles on hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 12.17. Scheduling a Recurring Ansible Job for a Host Group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . 
Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 12.18. Monitoring Jobs You can monitor the progress of a job while it is running, which can help with any troubleshooting that might be required. Ansible jobs run in batches of 100 hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible playbook runs on all hosts in the batch. Procedure In the Satellite web UI, navigate to Monitor > Jobs . This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run that you want to inspect. On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running. In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time. Click Back to Job at any time to return to the Job Details page. CLI procedure To monitor the progress of a job while it is running, complete the following steps: Find the ID of a job: Monitor the job output: Optional: To cancel a job, enter the following command:
[ "name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers", "# hammer job-template create --file \" path_to_template_file \" --name \" template_name \" --provider-type SSH --job-category \" category_name \"", "hammer settings set --name=remote_execution_fallback_proxy --value=true", "hammer settings set --name=remote_execution_global_proxy --value=true", "mkdir / remote_working_dir", "chcon --reference=/var /remote_working_dir", "satellite-installer --foreman-proxy-plugin-remote-execution-ssh-remote-working-dir /remote_working_dir", "ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]", "ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]", "mkdir ~/.ssh", "curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys", "chmod 700 ~/.ssh", "chmod 600 ~/.ssh/authorized_keys", "<%= snippet 'remote_execution_ssh_keys' %>", "id -u foreman-proxy", "umask 077", "mkdir -p \"/var/kerberos/krb5/user/ USER_ID \"", "cp your_client.keytab /var/kerberos/krb5/user/ USER_ID /client.keytab", "chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ USER_ID \"", "chmod -wx \"/var/kerberos/krb5/user/ USER_ID /client.keytab\"", "restorecon -RvF /var/kerberos/krb5", "satellite-installer --scenario satellite --foreman-proxy-plugin-remote-execution-ssh-ssh-kerberos-auth true", "hammer settings set --name=remote_execution_global_proxy --value=false", "hammer job-template list", "hammer job-template info --id template_ID", "# hammer job-invocation create --job-template \" template_name \" --inputs key1 =\" value \", key2 =\" value \",... --search-query \" query \"", "# hammer job-invocation list", "# hammer job-invocation output --id job_ID --host host_name", "# hammer job-invocation cancel --id job_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/configuring_and_setting_up_remote_jobs_managing-hosts
1.7. Global Network Block Device
1.7. Global Network Block Device Global Network Block Device (GNBD) provides block-device access to Red Hat GFS over TCP/IP. GNBD is similar in concept to NBD; however, GNBD is GFS-specific and tuned solely for use with GFS. GNBD is useful when more robust technologies - Fibre Channel or single-initiator SCSI - are not necessary or are cost-prohibitive. GNBD consists of two major components: a GNBD client and a GNBD server. A GNBD client runs in a node with GFS and imports a block device exported by a GNBD server. A GNBD server runs in another node and exports block-level storage from its local storage (either directly attached storage or SAN storage). Refer to Figure 1.19, "GNBD Overview" . Multiple GNBD clients can access a device exported by a GNBD server, thus making GNBD suitable for use by a group of nodes running GFS. Figure 1.19. GNBD Overview
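As a rough illustration of the client/server split shown in the figure, the commands below sketch exporting a device from a GNBD server and importing it on a GFS node. The option names are recalled from the GNBD utilities and the device, export name, and server hostname are placeholders, so treat this as an assumption-laden sketch rather than reference syntax.
# On the GNBD server node: start the server daemon and export a local block device.
gnbd_serv
gnbd_export -d /dev/sdb1 -e gfs_disk
# On each GFS node acting as a GNBD client: import the devices exported by the server.
gnbd_import -i gnbd-server.example.com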
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-gnbd-overview-CSO
Chapter 30. KafkaJmxAuthenticationPassword schema reference
Chapter 30. KafkaJmxAuthenticationPassword schema reference Used in: KafkaJmxOptions The type property is a discriminator that distinguishes use of the KafkaJmxAuthenticationPassword type from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword . Property Property type Description type string Must be password .
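For orientation, this type is set through the jmxOptions property of the Kafka resource; the fragment below (shown as comments) is a hedged sketch of how password-protected JMX is enabled, and the file name is an arbitrary example.
# Sketch of the relevant fragment of a Kafka custom resource (for example, kafka.yaml):
#   spec:
#     kafka:
#       jmxOptions:
#         authentication:
#           type: password
# Apply the edited resource in the usual way:
oc apply -f kafka.yaml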
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkajmxauthenticationpassword-reference
Chapter 24. Archetype management
Chapter 24. Archetype management Business Central provides an archetype management feature that enables you to list, add, validate, set as default, and delete archetypes. You can manage archetypes from the Archetypes page in Business Central. Archetypes are projects installed in Apache Maven repositories that you can use to set or create a template structure if required. For the most up-to-date and detailed information about archetypes, see the Introduction to Archetypes page . 24.1. Listing archetypes The Archetypes page lists all the archetypes that are added in Business Central. This list provides detailed information about the Group ID , Artifact ID , Version , Created Date , Status , and Actions of an archetype. Prerequisites You have created an archetype and listed it in the Business Central Settings from the Maven repository. Procedure In Business Central, select the Admin icon in the upper-right corner of the screen and select Archetypes . In the Status column, a green icon indicates a valid archetype, a red icon indicates an invalid archetype, and a blue icon indicates that the corresponding archetype is the default one for new spaces. 24.2. Adding an archetype You can add a new archetype to Business Central. Prerequisites You have installed an archetype in the Maven repository. Procedure In Business Central, select the Admin icon in the upper-right corner of the screen and select Archetypes . Click Add Archetype . In the Add Archetype panel, enter the GAV attributes in the Group ID , Artifact ID , and Version fields respectively. Click Add . Business Central validates the newly added archetype and makes it available for use as a template in all spaces. 24.3. Managing additional features of an archetype You can delete, set as default, and validate archetypes from the Archetypes page in Business Central. Prerequisites You have created an archetype and listed it in the Business Central Settings from the Maven repository. Procedure In Business Central, select the Admin icon in the upper-right corner of the screen and select Archetypes . From the Actions column, click the icon on the right side of an archetype. Select Delete from the drop-down menu to delete an archetype from the list. Select Validate from the drop-down menu to validate whether the archetype is valid or not. Note When Business Central starts up, all the registered archetypes are automatically validated. Select Set as default from the drop-down menu to set an archetype as the default for new spaces. 24.4. Creating a project using archetypes You can use archetypes to create a project in Business Central. When you create a project in Business Central, it is added to the Git repository that is connected to your Red Hat Decision Manager installation. Prerequisites You have created an archetype and listed it in the Business Central Settings from the Maven repository. You have set an archetype as default in your space in Business Central. Procedure In Business Central, go to Menu Design Projects . Select or create the space into which you want to add a new project from an archetype template. Click Add Project . Type the project name and description in the Name and Description fields respectively. Click Configure Advanced Options . Select the Based on template check box. Select the archetype from the drop-down options if required. The default archetype is already set in the space. Click Add . The Assets view of the project opens based on the selected archetype template. 24.5. 
Managing archetypes using space settings in Business Central When you add archetypes to Business Central, you can use them as templates in all spaces. You can manage all the archetypes from the Settings tab, which is available in the space. This tab is visible only to users with the admin role. Prerequisites You have installed an archetype in the Maven repository. You have created an archetype and listed it in the Business Central Settings from the Maven repository. Procedure In Business Central, go to Menu Design Projects . Select or create the space in which you want to manage the archetypes. The default space is MySpace . Click Settings . To include or exclude the archetypes in the space, select the Include check box. From the Actions column, click the icon on the right side of an archetype and select Set as default from the drop-down menu to set an archetype as the default for the space. Click Save .
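The prerequisite of having an archetype installed in a Maven repository is commonly met with the Maven archetype plugin; the sketch below turns an existing project into an archetype and installs it into the local repository. The project directory is an example and the generated GAV coordinates depend on your project's pom.xml.
# Generate an archetype from an existing project that has the desired template structure.
cd my-template-project
mvn archetype:create-from-project
# Build and install the generated archetype into the local Maven repository.
cd target/generated-sources/archetype
mvn clean install
# Use the resulting Group ID, Artifact ID, and Version in the Add Archetype panel.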
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/con-business-central-archetype-management_configuring-central
Chapter 1. Overview
Chapter 1. Overview Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. Red Hat OpenShift Data Foundation is integrated into the latest Red Hat OpenShift Container Platform to address platform services, application portability, and persistence challenges. It provides a highly scalable backend for the generation of cloud-native applications, built on a technology stack that includes Red Hat Ceph Storage, the Rook.io Operator, and NooBaa's Multicloud Object Gateway technology. Red Hat OpenShift Data Foundation is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS Validation on only the x86_64, ppc64le, and s390X architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards . Red Hat OpenShift Data Foundation provides a trusted, enterprise-grade application development environment that simplifies and enhances the user experience across the application lifecycle in a number of ways: Provides block storage for databases. Shared file storage for continuous integration, messaging, and data aggregation. Object storage for cloud-first development, archival, backup, and media storage. Scale applications and data exponentially. Attach and detach persistent data volumes at an accelerated rate. Stretch clusters across multiple data-centers or availability zones. Establish a comprehensive application container registry. Support the generation of OpenShift workloads such as Data Analytics, Artificial Intelligence, Machine Learning, Deep Learning, and Internet of Things (IoT). Dynamically provision not only application containers, but data service volumes and containers, as well as additional OpenShift Container Platform nodes, Elastic Block Store (EBS) volumes and other infrastructure services. 1.1. About this release Red Hat OpenShift Data Foundation 4.17 ( RHSA-2024:8676 ) is now available. New enhancements, features, and known issues that pertain to OpenShift Data Foundation 4.17 are included in this topic. Red Hat OpenShift Data Foundation 4.17 is supported on the Red Hat OpenShift Container Platform version 4.17. For more information, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For Red Hat OpenShift Data Foundation life cycle information, refer to the layered and dependent products life cycle section in Red Hat OpenShift Container Platform Life Cycle Policy .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/4.17_release_notes/overview
Chapter 8. Replacing DistributedComputeHCI nodes
Chapter 8. Replacing DistributedComputeHCI nodes During hardware maintenance, you may need to scale down, scale up, or replace a DistributedComputeHCI node at an edge site. To replace a DistributedComputeHCI node, remove services from the node you are replacing, scale the number of nodes down, and then follow the procedures for scaling those nodes back up. 8.1. Removing Red Hat Ceph Storage services Before removing an HCI (hyperconverged) node from a cluster, you must remove Red Hat Ceph Storage services. To remove the Red Hat Ceph services, you must disable and remove the ceph-osd service from the cluster services on the node you are removing, then stop and disable the mon , mgr , and osd services. Procedure On the undercloud, use SSH to connect to the DistributedComputeHCI node that you want to remove: USD ssh tripleo-admin@<dcn-computehci-node> Start a cephadm shell. Use the configuration file and keyring file for the site that the host being removed is in: Record the OSDs (object storage devices) associated with the DistributedComputeHCI node you are removing for reference in a later step: [ceph: root@dcn2-computehci2-1 ~]# ceph osd tree -c /etc/ceph/dcn2.conf ... -3 0.24399 host dcn2-computehci2-1 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 ... Use SSH to connect to another node in the same cluster and remove the monitor from the cluster: Use SSH to log in again to the node that you are removing from the cluster. Stop and disable the mgr service: Start the cephadm shell: Verify that the mgr service for the node is removed from the cluster: 1 The node that the mgr service is removed from is no longer listed when the mgr service is successfully removed. Export the Red Hat Ceph Storage specification: Edit the specifications in the spec.yml file: Remove all instances of the host <dcn-computehci-node> from spec.yml Remove all instances of the <dcn-computehci-node> entry from the following: service_type: osd service_type: mon service_type: host Reapply the Red Hat Ceph Storage specification: Remove the OSDs that you identified using ceph osd tree : Verify the status of the OSDs being removed. Do not continue until the following command returns no output: Verify that no daemons remain on the host you are removing: If daemons are still present, you can remove them with the following command: Remove the <dcn-computehci-node> host from the Red Hat Ceph Storage cluster: 8.2. Removing the Image service (glance) services Remove image services from a node when you remove it from service. Procedure To disable the Image service services, disable them using systemctl on the node you are removing: [root@dcn2-computehci2-1 ~]# systemctl stop tripleo_glance_api.service [root@dcn2-computehci2-1 ~]# systemctl stop tripleo_glance_api_tls_proxy.service [root@dcn2-computehci2-1 ~]# systemctl disable tripleo_glance_api.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api.service. [root@dcn2-computehci2-1 ~]# systemctl disable tripleo_glance_api_tls_proxy.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api_tls_proxy.service. 8.3. Removing the Block Storage (cinder) services You must remove the cinder-volume and etcd services from the DistributedComputeHCI node when you remove it from service. 
Procedure Identify and disable the cinder-volume service on the node you are removing: (central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume | cinder-volume | dcn2-computehci2-1@tripleo_ceph | az-dcn2 | enabled | up | 2022-03-23T17:41:43.000000 | (central) [stack@site-undercloud-0 ~]USD openstack volume service set --disable dcn2-computehci2-1@tripleo_ceph cinder-volume Log on to a different DistributedComputeHCI node in the stack: USD ssh tripleo-admin@dcn2-computehci2-0 Remove the cinder-volume service associated with the node that you are removing: [root@dcn2-computehci2-0 ~]# podman exec -it cinder_volume cinder-manage service remove cinder-volume dcn2-computehci2-1@tripleo_ceph Service cinder-volume on host dcn2-computehci2-1@tripleo_ceph removed. Stop and disable the tripleo_cinder_volume service on the node that you are removing: 8.4. Delete the DistributedComputeHCI node Set the provisioned parameter to a value of false and remove the node from the stack. Disable the nova-compute service and delete the relevant network agent. Procedure Copy the baremetal-deployment.yaml file: Edit the baremetal-deployement-scaledown.yaml file. Identify the host you want to remove and set the provisioned parameter to have a value of false : Remove the node from the stack: Optional: If you are going to reuse the node, use ironic to clean the disk. This is required if the node will host Ceph OSDs: openstack baremetal node manage USDUUID openstack baremetal node clean USDUUID --clean-steps '[{"interface":"deploy", "step": "erase_devices_metadata"}]' openstack baremetal provide USDUUID Redeploy the central site. Include all templates that you used for the initial configuration: 8.5. Replacing a removed DistributedComputeHCI node 8.5.1. Replacing a removed DistributedComputeHCI node To add new HCI nodes to your DCN deployment, you must redeploy the edge stack with the additional node, perform a ceph export of that stack, and then perform a stack update for the central location. A stack update of the central location adds configurations specific to edge-sites. Prerequisites The node counts are correct in the nodes_data.yaml file of the stack that you want to replace the node in or add a new node to. Procedure You must set the EtcdIntialClusterState parameter to existing in one of the templates called by your deploy script: Redeploy using the deployment script specific to the stack: Export the Red Hat Ceph Storage data from the stack: Replace dcn_ceph_external.yaml with the newly generated dcn2_scale_up_ceph_external.yaml in the deploy script for the central location. Perform a stack update at central: 8.6. Verify the functionality of a replaced DistributedComputeHCI node Ensure the value of the status field is enabled , and that the value of the State field is up : (central) [stack@site-undercloud-0 ~]USD openstack compute service list -c Binary -c Host -c Zone -c Status -c State +----------------+-----------------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +----------------+-----------------------------------------+------------+---------+-------+ ... 
| nova-compute | dcn1-compute1-0.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn1-compute1-1.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn2-computehciscaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computescaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-2.redhat.local | az-dcn2 | enabled | up | ... Ensure that all network agents are in the up state: (central) [stack@site-undercloud-0 ~]USD openstack network agent list -c "Agent Type" -c Host -c Alive -c State +--------------------+-----------------------------------------+-------+-------+ | Agent Type | Host | Alive | State | +--------------------+-----------------------------------------+-------+-------+ | DHCP agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-1.redhat.local | :-) | UP | | DHCP agent | dcn3-compute3-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-2.redhat.local | :-) | UP | | Open vSwitch agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-1.redhat.local | :-) | UP | | L3 agent | central-controller0-2.redhat.local | :-) | UP | | Metadata agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computescaleout2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-5.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-2.redhat.local | :-) | UP | | DHCP agent | central-controller0-0.redhat.local | :-) | UP | | Open vSwitch agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-0.redhat.local | :-) | UP | ... Verify the status of the Ceph Cluster: Use SSH to connect to the new DistributedComputeHCI node and check the status of the Ceph cluster: [root@dcn2-computehci2-5 ~]# podman exec -it ceph-mon-dcn2-computehci2-5 \ ceph -s -c /etc/ceph/dcn2.conf Verify that both the ceph mon and ceph mgr services exist for the new node: services: mon: 3 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0,dcn2-computehci2-5 (age 3d) mgr: dcn2-computehci2-2(active, since 3d), standbys: dcn2-computehci2-0, dcn2-computehci2-5 osd: 20 osds: 20 up (since 3d), 20 in (since 3d) Verify the status of the ceph osds with 'ceph osd tree'. 
Ensure all osds for our new node are in STATUS up: [root@dcn2-computehci2-5 ~]# podman exec -it ceph-mon-dcn2-computehci2-5 ceph osd tree -c /etc/ceph/dcn2.conf ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.97595 root default -5 0.24399 host dcn2-computehci2-0 0 hdd 0.04880 osd.0 up 1.00000 1.00000 4 hdd 0.04880 osd.4 up 1.00000 1.00000 8 hdd 0.04880 osd.8 up 1.00000 1.00000 13 hdd 0.04880 osd.13 up 1.00000 1.00000 17 hdd 0.04880 osd.17 up 1.00000 1.00000 -9 0.24399 host dcn2-computehci2-2 3 hdd 0.04880 osd.3 up 1.00000 1.00000 5 hdd 0.04880 osd.5 up 1.00000 1.00000 10 hdd 0.04880 osd.10 up 1.00000 1.00000 14 hdd 0.04880 osd.14 up 1.00000 1.00000 19 hdd 0.04880 osd.19 up 1.00000 1.00000 -3 0.24399 host dcn2-computehci2-5 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 -7 0.24399 host dcn2-computehciscaleout2-0 2 hdd 0.04880 osd.2 up 1.00000 1.00000 6 hdd 0.04880 osd.6 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 12 hdd 0.04880 osd.12 up 1.00000 1.00000 16 hdd 0.04880 osd.16 up 1.00000 1.00000 Verify the cinder-volume service for the new DistributedComputeHCI node is in Status 'enabled' and in State 'up': (central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume -c Binary -c Host -c Zone -c Status -c State +---------------+---------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +---------------+---------------------------------+------------+---------+-------+ | cinder-volume | hostgroup@tripleo_ceph | az-central | enabled | up | | cinder-volume | dcn1-compute1-1@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn1-compute1-0@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn2-computehci2-0@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-2@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-5@tripleo_ceph | az-dcn2 | enabled | up | +---------------+---------------------------------+------------+---------+-------+ Note If the State of the cinder-volume service is down , then the service has not been started on the node. Use ssh to connect to the new DistributedComputeHCI node and check the status of the Glance services with 'systemctl': [root@dcn2-computehci2-5 ~]# systemctl --type service | grep glance tripleo_glance_api.service loaded active running glance_api container tripleo_glance_api_healthcheck.service loaded activating start start glance_api healthcheck tripleo_glance_api_tls_proxy.service loaded active running glance_api_tls_proxy container 8.7. Troubleshooting DistributedComputeHCI state down If the replacement node was deployed without the EtcdInitialClusterState parameter value set to existing , then the cinder-volume service of the replaced node shows down when you run openstack volume service list . Procedure Log onto the replacement node and check logs for the etcd service. Check that the logs show the etcd service is reporting a cluster ID mismatch in the /var/log/containers/stdouts/etcd.log log file: Set the EtcdInitialClusterState parameter to the value of existing in your deployment templates and rerun the deployment script. 
Use SSH to connect to the replacement node and run the following commands as root: Recheck the /var/log/containers/stdouts/etcd.log log file to verify that the node successfully joined the cluster: Check the state of the cinder-volume service, and confirm it reads up on the replacement node when you run openstack volume service list .
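The final check can be scripted if you prefer to wait for the service to settle; the following sketch polls the volume service list until the replacement node's backend reports up. The host name is the example used in this chapter and the polling interval is arbitrary.
# Poll until the cinder-volume backend on the replacement node reports "up".
until openstack volume service list --service cinder-volume -c Host -c State -f value | grep 'dcn2-computehci2-5@tripleo_ceph' | grep -q 'up'; do
    echo "cinder-volume on dcn2-computehci2-5 is not up yet; retrying in 30 seconds"
    sleep 30
done
echo "cinder-volume on dcn2-computehci2-5 is up"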
[ "ssh tripleo-admin@<dcn-computehci-node>", "sudo cephadm shell --config /etc/ceph/dcn2.conf --keyring /etc/ceph/dcn2.client.admin.keyring", "ceph osd tree -c /etc/ceph/dcn2.conf ... -3 0.24399 host dcn2-computehci2-1 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 ...", "sudo cephadm shell --config /etc/ceph/dcn2.conf --keyring /etc/ceph/dcn2.client.admin.keyring ceph mon remove dcn2-computehci2-1 -c /etc/ceph/dcn2.conf removing mon.dcn2-computehci2-1 at [v2:172.23.3.153:3300/0,v1:172.23.3.153:6789/0], there will be 2 monitors", "[tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [email protected] loaded active running Ceph Manager [tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl stop ceph-mgr@dcn2-computehci2-1 [tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl disable ceph-mgr@dcn2-computehci2-1 Removed /etc/systemd/system/multi-user.target.wants/[email protected].", "sudo cephadm shell --config /etc/ceph/dcn2.conf --keyring /etc/ceph/dcn2.client.admin.keyring", "ceph -s cluster: id: b9b53581-d590-41ac-8463-2f50aa985001 health: HEALTH_WARN 3 pools have too many placement groups mons are allowing insecure global_id reclaim services: mon: 2 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0 (age 2h) mgr: dcn2-computehci2-2(active, since 20h), standbys: dcn2-computehci2-0 1 osd: 15 osds: 15 up (since 3h), 15 in (since 3h) data: pools: 3 pools, 384 pgs objects: 32 objects, 88 MiB usage: 16 GiB used, 734 GiB / 750 GiB avail pgs: 384 active+clean", "ceph orch ls --export > spec.yml", "ceph orch apply -i spec.yml", "ceph orch osd rm --zap 1 7 11 15 18 Scheduled OSD(s) for removal", "ceph orch osd rm status OSD_ID HOST STATE PG_COUNT REPLACE FORCE DRAIN_STARTED_AT 1 dcn2-computehci2-1 draining 27 False False 2021-04-23 21:35:51.215361 7 dcn2-computehci2-1 draining 8 False False 2021-04-23 21:35:49.111500 11 dcn2-computehci2-1 draining 14 False False 2021-04-23 21:35:50.243762", "ceph orch ps dcn2-computehci2-1", "ceph orch host drain dcn2-computehci2-1", "ceph orch host rm dcn2-computehci2-1 Removed host 'dcn2-computehci2-1'", "systemctl stop tripleo_glance_api.service systemctl stop tripleo_glance_api_tls_proxy.service systemctl disable tripleo_glance_api.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api.service. 
systemctl disable tripleo_glance_api_tls_proxy.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api_tls_proxy.service.", "(central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume | cinder-volume | dcn2-computehci2-1@tripleo_ceph | az-dcn2 | enabled | up | 2022-03-23T17:41:43.000000 | (central) [stack@site-undercloud-0 ~]USD openstack volume service set --disable dcn2-computehci2-1@tripleo_ceph cinder-volume", "ssh tripleo-admin@dcn2-computehci2-0", "podman exec -it cinder_volume cinder-manage service remove cinder-volume dcn2-computehci2-1@tripleo_ceph Service cinder-volume on host dcn2-computehci2-1@tripleo_ceph removed.", "systemctl stop tripleo_cinder_volume.service systemctl disable tripleo_cinder_volume.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_cinder_volume.service", "cp /home/stack/dcn2/overcloud-baremetal-deploy.yaml /home/stack/dcn2/baremetal-deployment-scaledown.yaml", "instances: - hostname: dcn2-computehci2-1 provisioned: false", "openstack overcloud node delete --stack dcn2 --baremetal-deployment /home/stack/dcn2/baremetal_deployment_scaledown.yaml", "openstack baremetal node manage USDUUID openstack baremetal node clean USDUUID --clean-steps '[{\"interface\":\"deploy\", \"step\": \"erase_devices_metadata\"}]' openstack baremetal provide USDUUID", "openstack overcloud deploy --deployed-server --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/control-plane/central_roles.yaml -n ~/network-data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/central/overcloud-networks-deployed.yaml -e /home/stack/central/overcloud-vip-deployed.yaml -e /home/stack/central/deployed_metal.yaml -e /home/stack/central/deployed_ceph.yaml -e /home/stack/central/dcn_ceph.yaml -e /home/stack/central/glance_update.yaml", "parameter_defaults: EtcdInitialClusterState: existing", "(undercloud) [stack@site-undercloud-0 ~]USD ./overcloud_deploy_dcn2.sh ... 
Overcloud Deployed without error", "(undercloud) [stack@site-undercloud-0 ~]USD sudo -E openstack overcloud export ceph --stack dcn1,dcn2 --config-download-dir /var/lib/mistral --output-file ~/central/dcn2_scale_up_ceph_external.yaml", "(undercloud) [stack@site-undercloud-0 ~]USD ./overcloud_deploy.sh Overcloud Deployed without error", "(central) [stack@site-undercloud-0 ~]USD openstack compute service list -c Binary -c Host -c Zone -c Status -c State +----------------+-----------------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +----------------+-----------------------------------------+------------+---------+-------+ | nova-compute | dcn1-compute1-0.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn1-compute1-1.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn2-computehciscaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computescaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-2.redhat.local | az-dcn2 | enabled | up |", "(central) [stack@site-undercloud-0 ~]USD openstack network agent list -c \"Agent Type\" -c Host -c Alive -c State +--------------------+-----------------------------------------+-------+-------+ | Agent Type | Host | Alive | State | +--------------------+-----------------------------------------+-------+-------+ | DHCP agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-1.redhat.local | :-) | UP | | DHCP agent | dcn3-compute3-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-2.redhat.local | :-) | UP | | Open vSwitch agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-1.redhat.local | :-) | UP | | L3 agent | central-controller0-2.redhat.local | :-) | UP | | Metadata agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computescaleout2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-5.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-2.redhat.local | :-) | UP | | DHCP agent | central-controller0-0.redhat.local | :-) | UP | | Open vSwitch agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-0.redhat.local | :-) | UP |", "podman exec -it ceph-mon-dcn2-computehci2-5 ceph -s -c /etc/ceph/dcn2.conf", "services: mon: 3 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0,dcn2-computehci2-5 (age 3d) mgr: dcn2-computehci2-2(active, since 3d), standbys: dcn2-computehci2-0, dcn2-computehci2-5 osd: 20 osds: 20 up (since 3d), 20 in (since 3d)", "podman exec -it ceph-mon-dcn2-computehci2-5 ceph osd tree -c /etc/ceph/dcn2.conf ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.97595 root default -5 0.24399 host dcn2-computehci2-0 0 hdd 0.04880 osd.0 up 1.00000 1.00000 4 hdd 0.04880 osd.4 up 1.00000 1.00000 8 hdd 0.04880 osd.8 up 1.00000 1.00000 13 hdd 0.04880 osd.13 up 1.00000 1.00000 17 hdd 0.04880 osd.17 up 1.00000 1.00000 -9 0.24399 host dcn2-computehci2-2 3 hdd 0.04880 osd.3 up 1.00000 1.00000 5 hdd 0.04880 osd.5 up 1.00000 1.00000 10 hdd 0.04880 osd.10 up 1.00000 1.00000 14 hdd 0.04880 osd.14 up 1.00000 1.00000 19 hdd 0.04880 osd.19 up 1.00000 1.00000 -3 0.24399 host 
dcn2-computehci2-5 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 -7 0.24399 host dcn2-computehciscaleout2-0 2 hdd 0.04880 osd.2 up 1.00000 1.00000 6 hdd 0.04880 osd.6 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 12 hdd 0.04880 osd.12 up 1.00000 1.00000 16 hdd 0.04880 osd.16 up 1.00000 1.00000", "(central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume -c Binary -c Host -c Zone -c Status -c State +---------------+---------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +---------------+---------------------------------+------------+---------+-------+ | cinder-volume | hostgroup@tripleo_ceph | az-central | enabled | up | | cinder-volume | dcn1-compute1-1@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn1-compute1-0@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn2-computehci2-0@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-2@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-5@tripleo_ceph | az-dcn2 | enabled | up | +---------------+---------------------------------+------------+---------+-------+", "systemctl --type service | grep glance tripleo_glance_api.service loaded active running glance_api container tripleo_glance_api_healthcheck.service loaded activating start start glance_api healthcheck tripleo_glance_api_tls_proxy.service loaded active running glance_api_tls_proxy container", "2022-04-06T18:00:11.834104130+00:00 stderr F 2022-04-06 18:00:11.834045 E | rafthttp: request cluster ID mismatch (got 654f4cf0e2cfb9fd want 918b459b36fe2c0c)", "systemctl stop tripleo_etcd rm -rf /var/lib/etcd/* systemctl start tripleo_etcd", "2022-04-06T18:24:22.130059875+00:00 stderr F 2022-04-06 18:24:22.129395 I | etcdserver/membership: added member 96f61470cd1839e5 [https://dcn2-computehci2-4.internalapi.redhat.local:2380] to cluster 654f4cf0e2cfb9fd" ]
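Related to the OSD removal in Section 8.1: draining can take some time, and the procedure says not to continue until ceph orch osd rm status returns no output. The following is a small sketch of waiting for that condition from inside the cephadm shell; the 60 second interval is arbitrary.
# Wait until no OSD removal operations remain before continuing.
while [ -n "$(ceph orch osd rm status)" ]; do
    echo "OSD removal still in progress; checking again in 60 seconds"
    sleep 60
done
echo "No OSD removal operations pending"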
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/assembly_replacing-dcnhci-nodes
2.2. Partitioning the Disk
2.2. Partitioning the Disk Red Hat recommends creating separate partitions for the /boot , / , /home , /tmp , and /var/tmp/ directories. The reasons for each are different, and we will address each partition. /boot This partition is the first partition that is read by the system during boot up. The boot loader and kernel images that are used to boot your system into Red Hat Enterprise Linux 7 are stored in this partition. This partition should not be encrypted. If this partition is included in / and that partition is encrypted or otherwise becomes unavailable, then your system will not be able to boot. /home When user data ( /home ) is stored in / instead of in a separate partition, the partition can fill up, causing the operating system to become unstable. Also, upgrading your system to a newer version of Red Hat Enterprise Linux 7 is much easier when you can keep your data in a separate /home partition, because it is not overwritten during installation. If the root partition ( / ) becomes corrupt, your data could be lost forever. By using a separate partition, there is slightly more protection against data loss. You can also target this partition for frequent backups. /tmp and /var/tmp/ Both the /tmp and /var/tmp/ directories are used to store data that does not need to be stored for a long period of time. However, if a lot of data floods one of these directories, it can consume all of your storage space. If this happens and these directories are stored within / , then your system could become unstable and crash. For this reason, moving these directories into their own partitions is a good idea. Note During the installation process, you have an option to encrypt partitions. You must supply a passphrase. This passphrase serves as a key to unlock the bulk encryption key, which is used to secure the partition's data. For more information, see Section 4.9.1, "Using LUKS Disk Encryption" .
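To make the recommended layout concrete, the following is a sketch of Kickstart part entries that keep these directories on separate partitions, with everything except /boot encrypted. The file name, sizes, and file system type are illustrative assumptions; at installation time you are prompted for the LUKS passphrase unless you supply one with --passphrase.
# Append an example partitioning layout to a Kickstart file (sizes in MiB, values illustrative).
cat >> /root/ks.cfg <<'EOF'
part /boot --fstype=xfs --size=1024
part / --fstype=xfs --size=20480 --encrypted
part /home --fstype=xfs --size=40960 --encrypted
part /tmp --fstype=xfs --size=4096 --encrypted
part /var/tmp --fstype=xfs --size=4096 --encrypted
EOF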
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-partitioning_the_disk
8.225. vhostmd
8.225. vhostmd 8.225.1. RHBA-2013:1579 - vhostmd bug fix update Updated vhostmd packages that fix one bug are now available for Red Hat Enterprise Linux 6 for SAP. The Virtual Host Metrics Daemon (vhostmd) provides virtual machines with information on the resource utilization of the Red Hat Enterprise Linux host on which they are being run. Bug Fix BZ# 820500 Due to bugs in the libmetrics code, user programs could terminate with a segmentation fault when attempting to obtain guest metrics from vhostmd. The libmetrics code has been fixed to perform XPath queries and propagate errors to the user correctly so that user programs can now obtain guest metrics as expected. All users of vhostmd are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/vhostmd
Chapter 3. Disconnected installation mirroring
Chapter 3. Disconnected installation mirroring 3.1. About disconnected installation mirroring You can use a mirror registry to ensure that your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 3.1.1. Creating a mirror registry If you already have a container image registry, such as Red Hat Quay, you can use it as your mirror registry. If you do not already have a registry, you can create a mirror registry using the mirror registry for Red Hat OpenShift . 3.1.2. Mirroring images for a disconnected installation You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 3.2. Creating a mirror registry with mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip this section and go straight to Mirroring the OpenShift Container Platform image repository . 3.2.1. Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 and 9 with Podman 3.3 and OpenSSL installed. Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 12 GB for OpenShift Container Platform 4.11 release images, or about 358 GB for OpenShift Container Platform 4.11 release images and OpenShift Container Platform 4.11 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only release images and Operator images. Storage requirements can vary based on your organization's needs. You might require more space, for example, when you mirror multiple z-streams. You can use standard Red Hat Quay functionality or the proper API callout to remove unnecessary images and free up space. 3.2.2. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with preconfigured local storage and a local database. 
It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 3.2.2.1. Mirror registry for Red Hat OpenShift limitations The following limitations apply to the mirror registry for Red Hat OpenShift : The mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. It is not intended to replace Red Hat Quay or the internal image registry for OpenShift Container Platform. The mirror registry for Red Hat OpenShift is only supported for hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Note Because the mirror registry for Red Hat OpenShift uses local storage, you should remain aware of the storage usage consumed when mirroring images and use Red Hat Quay's garbage collection feature to mitigate potential issues. For more information about this feature, see "Red Hat Quay garbage collection". Support for Red Hat product images that are pushed to the mirror registry for Red Hat OpenShift for bootstrapping purposes are covered by valid subscriptions for each respective product. A list of exceptions to further enable the bootstrap experience can be found on the Self-managed Red Hat OpenShift sizing and subscription guide . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 3.2.3. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. 
Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: USD podman login -u init \ -p <password> \ <host_example_com>:8443 \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 3.2.4. Updating mirror registry for Red Hat OpenShift from a local host This procedure explains how to update the mirror registry for Red Hat OpenShift from a local host using the upgrade command. Updating to the latest version ensures bug fixes and security vulnerability fixes. Important When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a local host. Procedure To upgrade the mirror registry for Red Hat OpenShift from localhost, enter the following command: USD sudo ./mirror-registry upgrade -v Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.2.z to 1.3.0 and you used a specified directory in your 1.2.z deployment, you must pass in the new --pgStorage and --quayStorage flags. For example: USD sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --pgStorage <example_directory_name>/pg-data --quayStorage <example_directory_name>/quay-storage -v 3.2.5.
Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on the remote target host by running the mirror-registry tool from your local host with your current user account. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: USD podman login -u init \ -p <password> \ <host_example_com>:8443 \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 3.2.6. Updating mirror registry for Red Hat OpenShift from a remote host This procedure explains how to update the mirror registry for Red Hat OpenShift from a remote host using the upgrade command. Updating to the latest version ensures bug fixes and security vulnerability fixes. Important When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a remote host. Procedure To upgrade the mirror registry for Red Hat OpenShift from a remote host, enter the following command: USD ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry.
For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. 3.2.7. Replacing mirror registry for Red Hat OpenShift SSL/TLS certificates In some cases, you might want to update your SSL/TLS certificates for the mirror registry for Red Hat OpenShift . This is useful in the following scenarios: If you are replacing the current mirror registry for Red Hat OpenShift certificate. If you are using the same certificate as the mirror registry for Red Hat OpenShift installation. If you are periodically updating the mirror registry for Red Hat OpenShift certificate. Use the following procedure to replace mirror registry for Red Hat OpenShift SSL/TLS certificates. Prerequisites You have downloaded the ./mirror-registry binary from the OpenShift console Downloads page. Procedure Enter the following command to install the mirror registry for Red Hat OpenShift : USD ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> This installs the mirror registry for Red Hat OpenShift to the USDHOME/quay-install directory. Prepare a new certificate authority (CA) bundle and generate new ssl.key and ssl.crt key files. For more information, see Using SSL/TLS . Assign /USDHOME/quay-install an environment variable, for example, QUAY , by entering the following command: USD export QUAY=/USDHOME/quay-install Copy the new ssl.crt file to the /USDHOME/quay-install directory by entering the following command: USD cp ~/ssl.crt USDQUAY/quay-config Copy the new ssl.key file to the /USDHOME/quay-install directory by entering the following command: USD cp ~/ssl.key USDQUAY/quay-config Restart the quay-app application pod by entering the following command: USD systemctl restart quay-app 3.2.8. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: USD ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt. Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 3.2.9. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. --initPassword The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --no-color , -c Allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. 
Defaults to <targetHostname>:8443 if left unspecified. [1] --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Requires about 12 GB for OpenShift Container Platform 4.10 Release images, or about 358 GB for OpenShift Container Platform 4.10 Release images and OpenShift Container Platform 4.10 Red Hat Operator images. Defaults to /etc/quay-install if left unspecified. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to USDHOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to USDUSER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --quayHostname must be modified if the public DNS name of your system is different from the local hostname. Additionally, the --quayHostname flag does not support installation with an IP address. Installation with a hostname is required. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. 3.2.10. Mirror registry for Red Hat OpenShift release notes The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. These release notes track the development of the mirror registry for Red Hat OpenShift in OpenShift Container Platform. For an overview of the mirror registry for Red Hat OpenShift , see Creating a mirror registry with mirror registry for Red Hat OpenShift . 3.2.10.1. Mirror registry for Red Hat OpenShift 1.3.10 Issued: 2023-12-07 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.14. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:7628 - mirror registry for Red Hat OpenShift 1.3.10 3.2.10.2. Mirror registry for Red Hat OpenShift 1.3.9 Issued: 2023-09-19 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.12. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:5241 - mirror registry for Red Hat OpenShift 1.3.9 3.2.10.3. Mirror registry for Red Hat OpenShift 1.3.8 Issued: 2023-08-16 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.11. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:4622 - mirror registry for Red Hat OpenShift 1.3.8 3.2.10.4. Mirror registry for Red Hat OpenShift 1.3.7 Issued: 2023-07-19 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.10. 
The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:4087 - mirror registry for Red Hat OpenShift 1.3.7 3.2.10.5. Mirror registry for Red Hat OpenShift 1.3.6 Issued: 2023-05-30 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.8. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:3302 - mirror registry for Red Hat OpenShift 1.3.6 3.2.10.6. Mirror registry for Red Hat OpenShift 1.3.5 Issued: 2023-05-18 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.7. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:3225 - mirror registry for Red Hat OpenShift 1.3.5 3.2.10.7. Mirror registry for Red Hat OpenShift 1.3.4 Issued: 2023-04-25 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.6. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1914 - mirror registry for Red Hat OpenShift 1.3.4 3.2.10.8. Mirror registry for Red Hat OpenShift 1.3.3 Issued: 2023-04-05 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1528 - mirror registry for Red Hat OpenShift 1.3.3 3.2.10.9. Mirror registry for Red Hat OpenShift 1.3.2 Issued: 2023-03-21 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1376 - mirror registry for Red Hat OpenShift 1.3.2 3.2.10.10. Mirror registry for Red Hat OpenShift 1.3.1 Issued: 2023-03-7 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1086 - mirror registry for Red Hat OpenShift 1.3.1 3.2.10.11. Mirror registry for Red Hat OpenShift 1.3.0 Issued: 2023-02-20 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:0558 - mirror registry for Red Hat OpenShift 1.3.0 3.2.10.11.1. New features Mirror registry for Red Hat OpenShift is now supported on Red Hat Enterprise Linux (RHEL) 9 installations. IPv6 support is now available on mirror registry for Red Hat OpenShift local host installations. IPv6 is currently unsupported on mirror registry for Red Hat OpenShift remote host installations. A new feature flag, --quayStorage , has been added. By specifying this flag, you can manually set the location for the Quay persistent storage. A new feature flag, --pgStorage , has been added. By specifying this flag, you can manually set the location for the Postgres persistent storage. Previously, users were required to have root privileges ( sudo ) to install mirror registry for Red Hat OpenShift . With this update, sudo is no longer required to install mirror registry for Red Hat OpenShift . When mirror registry for Red Hat OpenShift was installed with sudo , an /etc/quay-install directory that contained installation files, local storage, and the configuration bundle was created. With the removal of the sudo requirement, installation files and the configuration bundle are now installed to USDHOME/quay-install . Local storage, for example Postgres and Quay, are now stored in named volumes automatically created by Podman. 
To override the default directories that these files are stored in, you can use the command line arguments for mirror registry for Red Hat OpenShift . For more information about mirror registry for Red Hat OpenShift command line arguments, see " Mirror registry for Red Hat OpenShift flags". 3.2.10.11.2. Bug fixes Previously, the following error could be returned when attempting to uninstall mirror registry for Red Hat OpenShift : ["Error: no container with name or ID \"quay-postgres\" found: no such container"], "stdout": "", "stdout_lines": [] * . With this update, the order that mirror registry for Red Hat OpenShift services are stopped and uninstalled have been changed so that the error no longer occurs when uninstalling mirror registry for Red Hat OpenShift . For more information, see PROJQUAY-4629 . 3.2.10.12. Mirror registry for Red Hat OpenShift 1.2.9 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.10. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:7369 - mirror registry for Red Hat OpenShift 1.2.9 3.2.10.13. Mirror registry for Red Hat OpenShift 1.2.8 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.9. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:7065 - mirror registry for Red Hat OpenShift 1.2.8 3.2.10.14. Mirror registry for Red Hat OpenShift 1.2.7 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.8. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6500 - mirror registry for Red Hat OpenShift 1.2.7 3.2.10.14.1. Bug fixes Previously, getFQDN() relied on the fully-qualified domain name (FQDN) library to determine its FQDN, and the FQDN library tried to read the /etc/hosts folder directly. Consequently, on some Red Hat Enterprise Linux CoreOS (RHCOS) installations with uncommon DNS configurations, the FQDN library would fail to install and abort the installation. With this update, mirror registry for Red Hat OpenShift uses hostname to determine the FQDN. As a result, the FQDN library does not fail to install. ( PROJQUAY-4139 ) 3.2.10.15. Mirror registry for Red Hat OpenShift 1.2.6 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.7. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6278 - mirror registry for Red Hat OpenShift 1.2.6 3.2.10.15.1. New features A new feature flag, --no-color ( -c ) has been added. This feature flag allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. 3.2.10.16. Mirror registry for Red Hat OpenShift 1.2.5 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.6. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6071 - mirror registry for Red Hat OpenShift 1.2.5 3.2.10.17. Mirror registry for Red Hat OpenShift 1.2.4 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5884 - mirror registry for Red Hat OpenShift 1.2.4 3.2.10.18. Mirror registry for Red Hat OpenShift 1.2.3 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5649 - mirror registry for Red Hat OpenShift 1.2.3 3.2.10.19. 
Mirror registry for Red Hat OpenShift 1.2.2 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5501 - mirror registry for Red Hat OpenShift 1.2.2 3.2.10.20. Mirror registry for Red Hat OpenShift 1.2.1 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:4986 - mirror registry for Red Hat OpenShift 1.2.1 3.2.10.21. Mirror registry for Red Hat OpenShift 1.2.0 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:4986 - mirror registry for Red Hat OpenShift 1.2.0 3.2.10.21.1. Bug fixes Previously, all components and workers running inside of the Quay pod Operator had log levels set to DEBUG . As a result, large traffic logs were created that consumed unnecessary space. With this update, log levels are set to WARN by default, which reduces traffic information while emphasizing problem scenarios. ( PROJQUAY-3504 ) 3.2.10.22. Mirror registry for Red Hat OpenShift 1.1.0 The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:0956 - mirror registry for Red Hat OpenShift 1.1.0 3.2.10.22.1. New features A new command, mirror-registry upgrade has been added. This command upgrades all container images without interfering with configurations or data. Note If quayRoot was previously set to something other than default, it must be passed into the upgrade command. 3.2.10.22.2. Bug fixes Previously, the absence of quayHostname or targetHostname did not default to the local hostname. With this update, quayHostname and targetHostname now default to the local hostname if they are missing. ( PROJQUAY-3079 ) Previously, the command ./mirror-registry --version returned an unknown flag error. Now, running ./mirror-registry --version returns the current version of the mirror registry for Red Hat OpenShift . ( PROJQUAY-3086 ) Previously, users could not set a password during installation, for example, when running ./mirror-registry install --initUser <user_name> --initPassword <password> --verbose . With this update, users can set a password during installation. ( PROJQUAY-3149 ) Previously, the mirror registry for Red Hat OpenShift did not recreate pods if they were destroyed. Now, pods are recreated if they are destroyed. ( PROJQUAY-3261 ) 3.2.11. Additional resources Red Hat Quay garbage collection Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring Operator catalogs for use with disconnected clusters 3.3. Mirroring images for a disconnected installation You can ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the internet. 
If you do not have access to a mirror host, use the Mirroring Operator catalogs for use with disconnected clusters procedure to copy images to a device you can move across network boundaries with. 3.3.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Red Hat Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 3.3.2. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. 
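For example, you might search the CRI-O journal on a node for these entries; the node name here is a placeholder, and the exact log wording can vary between CRI-O versions:
$ oc debug node/<node_name> -- chroot /host journalctl -u crio | grep "Trying to access"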
Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.3.3. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.3.3.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.3.4. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. 
Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.3.5. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . 
Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your server, such as x86_64 or aarch64 : USD ARCHITECTURE=<server_architecture> Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . 
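In both the connected and disconnected scenarios, the imageContentSources section that you record and later add to the install-config.yaml file typically resembles the following sketch; the mirror registry host name, port, and repository are placeholders for your own values:
imageContentSources:
- mirrors:
  - <local_registry>/<local_repository>
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository>
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev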
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> \ --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.3.6. The Cluster Samples Operator in a disconnected environment In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation. 3.3.6.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. 
Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 3.3.7. Mirroring Operator catalogs for use with disconnected clusters You can mirror the Operator contents of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2 . For a cluster on a restricted network, this registry can be one that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Running oc adm catalog mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . The oc adm catalog mirror command also automatically mirrors the index image that is specified during the mirroring process, whether it be a Red Hat-provided index image or your own custom-built index image, to the target registry. You can then use the mirrored index image to create a catalog source that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster. Additional resources Using Operator Lifecycle Manager on restricted networks 3.3.7.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters has the following prerequisites: Workstation with unrestricted network access. podman version 1.9.3 or later. If you want to filter, or prune , an existing catalog and selectively mirror only a subset of Operators, see the following sections: Installing the opm CLI Updating or filtering a file-based catalog image If you want to mirror a Red Hat-provided catalog, run the following command on your workstation with unrestricted network access to authenticate with registry.redhat.io : USD podman login registry.redhat.io Access to a mirror registry that supports Docker v2-2 . On your mirror registry, decide which repository, or namespace, to use for storing mirrored Operator content. For example, you might create an olm-mirror repository. If your mirror registry does not have internet access, connect removable media to your workstation with unrestricted network access. If you are working with private registries, including registry.redhat.io , set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json 3.3.7.2. Extracting and mirroring catalog contents The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. 
The default behavior of the command generates manifests, then automatically mirrors all of the image content from the index image, as well as the index image itself, to your mirror registry. Alternatively, if your mirror registry is on a completely disconnected, or airgapped , host, you can first mirror the content to removable media, move the media to the disconnected environment, then mirror the content from the media to the registry. 3.3.7.2.1. Mirroring catalog contents to registries on the same network If your mirror registry is co-located on the same network as your workstation with unrestricted network access, take the following actions on your workstation. Procedure If your mirror registry requires authentication, run the following command to log in to the registry: USD podman login <mirror_registry> Run the following command to extract and mirror the content to the mirror registry: USD oc adm catalog mirror \ <index_image> \ 1 <mirror_registry>:<port>[/<repository>] \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] \ 5 [--manifests-only] 6 1 Specify the index image for the catalog that you want to mirror. 2 Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port> . 3 Optional: If required, specify the location of your registry credentials file. {REG_CREDS} is required for registry.redhat.io . 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 . 6 Optional: Generate only the manifests required for mirroring without actually mirroring the image content to a registry. This option can be useful for reviewing what will be mirrored, and lets you make any changes to the mapping list, if you require only a subset of packages. You can then use the mapping.txt file with the oc image mirror command to mirror the modified list of images in a later step. This flag is intended for only advanced selective mirroring of content from the catalog. Example output src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 ... wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2 1 Directory for the temporary index.db database generated by the command. 2 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. 
For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Additional resources Architecture and operating system support for Operators 3.3.7.2.2. Mirroring catalog contents to airgapped registries If your mirror registry is on a completely disconnected, or airgapped, host, take the following actions. Procedure Run the following command on your workstation with unrestricted network access to mirror the content to local files: USD oc adm catalog mirror \ <index_image> \ 1 file:///local/index \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the index image for the catalog that you want to mirror. 2 Specify the content to mirror to local files in your current directory. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 , and .* Example output ... info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2 1 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. 2 Record the expanded file:// path that is based on your provided index image. This path is referenced in a subsequent step. This command creates a v2/ directory in your current directory. Copy the v2/ directory to removable media. Physically remove the media and attach it to a host in the disconnected environment that has access to the mirror registry. If your mirror registry requires authentication, run the following command on your host in the disconnected environment to log in to the registry: USD podman login <mirror_registry> Run the following command from the parent directory containing the v2/ directory to upload the images from local files to the mirror registry: USD oc adm catalog mirror \ file://local/index/<repository>/<index_image>:<tag> \ 1 <mirror_registry>:<port>[/<repository>] \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the file:// path from the command output. 2 Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port> . 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . 
This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 , and .* Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Run the oc adm catalog mirror command again. Use the newly mirrored index image as the source and the same mirror registry target used in the previous step: USD oc adm catalog mirror \ <mirror_registry>:<port>/<index_image> \ <mirror_registry>:<port>[/<repository>] \ --manifests-only \ 1 [-a USD{REG_CREDS}] \ [--insecure] 1 The --manifests-only flag is required for this step so that the command does not copy all of the mirrored content again. Important This step is required because the image mappings in the imageContentSourcePolicy.yaml file generated during the previous step must be updated from local paths to valid mirror locations. Failure to do so will cause errors when you create the ImageContentSourcePolicy object in a later step. After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to enable installation of Operators from OperatorHub. Additional resources Architecture and operating system support for Operators 3.3.7.3. Generated manifests After mirroring Operator catalog content to your mirror registry, a manifests directory is generated in your current directory. If you mirrored content to a registry on the same network, the directory name takes the following pattern: manifests-<index_image_name>-<random_number> If you mirrored content to a registry on a disconnected host in the previous section, the directory name takes the following pattern: manifests-index/<repository>/<index_image_name>-<random_number> Note The manifests directory name is referenced in subsequent procedures. The manifests directory contains the following files, some of which might require further modification: The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your index image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster. Important If you mirrored the content to local files, you must modify your catalogSource.yaml file to remove any slash ( / ) characters from the metadata.name field. Otherwise, when you attempt to create the object, it fails with an "invalid resource name" error. The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. The mapping.txt file contains all of the source images and where to map them in the target registry.
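Each line in mapping.txt maps a source image to its destination in the form source=destination; the registry host, repository, digests, and tags below are placeholders shown only to illustrate the format:
registry.redhat.io/<operator_namespace>/<operator_image>@sha256:<digest>=<mirror_registry>:<port>/<repository>/<operator_image>:<tag>
registry.redhat.io/<operator_namespace>/<operator_bundle_image>@sha256:<digest>=<mirror_registry>:<port>/<repository>/<operator_bundle_image>:<tag>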
This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration. Important If you used the --manifests-only flag during the mirroring process and want to further trim the subset of packages to mirror, see the steps in the Mirroring a package manifest format catalog image procedure of the OpenShift Container Platform 4.7 documentation about modifying your mapping.txt file and using the file with the oc image mirror command. 3.3.7.4. Postinstallation requirements After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to populate and enable installation of Operators from OperatorHub. Additional resources Populating OperatorHub from mirrored Operator catalogs Updating or filtering a file-based catalog image 3.3.8. Next steps Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere , bare metal , or Amazon Web Services . 3.3.9. Additional resources See Gathering data about specific features for more information about using must-gather. 3.4. Mirroring images for a disconnected installation using the oc-mirror plugin Running your cluster in a restricted network without direct internet connectivity is possible by installing the cluster from a mirrored set of OpenShift Container Platform container images in a private registry. This registry must be running at all times as long as the cluster is running. See the Prerequisites section for more information. You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity in order to download the required images from the official Red Hat registries. The following steps outline the high-level workflow for using the oc-mirror plugin to mirror images to a mirror registry: Create an image set configuration file. Mirror the image set to the mirror registry by using one of the following methods: Mirror an image set directly to the mirror registry. Mirror an image set to disk, transfer the image set to the target environment, then upload the image set to the target mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps to update your mirror registry as necessary. 3.4.1. About the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror all required OpenShift Container Platform content and other images to your mirror registry by using a single tool. It provides the following features: Provides a centralized method to mirror OpenShift Container Platform releases, Operators, helm charts, and other images. Maintains update paths for OpenShift Container Platform and Operators. Uses a declarative image set configuration file to include only the OpenShift Container Platform releases, Operators, and images that your cluster needs. Performs incremental mirroring, which reduces the size of future image sets. Prunes images from the target mirror registry that were excluded from the image set configuration since the previous execution. Optionally generates supporting artifacts for OpenShift Update Service (OSUS) usage.
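To make the high-level workflow outlined earlier in this section concrete, an end-to-end run might look like the following sketch. The registry host, repository, and results directory name are placeholders chosen for illustration, not values taken from this procedure; each command is described in detail in the sections that follow.
oc mirror init --registry mirror.example.com/oc-mirror-metadata > imageset-config.yaml
oc mirror --config=./imageset-config.yaml docker://mirror.example.com:8443
oc apply -f ./oc-mirror-workspace/results-<timestamp>/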
When using the oc-mirror plugin, you specify which content to mirror in an image set configuration file. In this YAML file, you can fine-tune the configuration to only include the OpenShift Container Platform releases and Operators that your cluster needs. This reduces the amount of data that you need to download and transfer. The oc-mirror plugin can also mirror arbitrary helm charts and additional container images to assist users in seamlessly synchronizing their workloads onto mirror registries. The first time you run the oc-mirror plugin, it populates your mirror registry with the required content to perform your disconnected cluster installation or update. In order for your disconnected cluster to continue receiving updates, you must keep your mirror registry updated. To update your mirror registry, you run the oc-mirror plugin using the same configuration as the first time you ran it. The oc-mirror plugin references the metadata from the storage backend and only downloads what has been released since the last time you ran the tool. This provides update paths for OpenShift Container Platform and Operators and performs dependency resolution as required. Important When using the oc-mirror CLI plugin to populate a mirror registry, any further updates to the mirror registry must be made using the oc-mirror tool. 3.4.2. oc-mirror compatibility and support The oc-mirror plugin supports mirroring OpenShift Container Platform payload images and Operator catalogs for OpenShift Container Platform versions 4.9 and later. Use the latest available version of the oc-mirror plugin regardless of which versions of OpenShift Container Platform you need to mirror. Important If you used the Technology Preview version of the oc-mirror plugin for OpenShift Container Platform 4.10, it is not possible to migrate your mirror registry to OpenShift Container Platform 4.11. You must download the new oc-mirror plugin, use a new storage back end, and use a new top-level namespace on the target mirror registry. 3.4.3. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry that supports Docker v2-2 , such as Red Hat Quay. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , which is a small-scale container registry included with OpenShift Container Platform subscriptions. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. 
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional resources For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.4.4. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Red Hat Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 3.4.5. Preparing your mirror hosts Before you can use the oc-mirror plugin to mirror images, you must install the plugin and create a container image registry credentials file to allow the mirroring from Red Hat to your mirror. 3.4.5.1. Installing the oc-mirror OpenShift CLI plugin To use the oc-mirror OpenShift CLI plugin to mirror registry images, you must install the plugin. If you are mirroring image sets in a fully disconnected environment, ensure that you install the oc-mirror plugin on the host with internet access and the host in the disconnected environment with access to the mirror registry. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Download the oc-mirror CLI plugin. Navigate to the Downloads page of the OpenShift Cluster Manager Hybrid Cloud Console . Under the OpenShift disconnected installation tools section, click Download for OpenShift Client (oc) mirror plugin and save the file. Extract the archive: USD tar xvzf oc-mirror.tar.gz If necessary, update the plugin file to be executable: USD chmod +x oc-mirror Note Do not rename the oc-mirror file. Install the oc-mirror CLI plugin by placing the file in your PATH , for example, /usr/local/bin : USD sudo mv oc-mirror /usr/local/bin/. Verification Run oc mirror help to verify that the plugin was successfully installed: USD oc mirror help Additional resources Installing and using CLI plugins 3.4.5.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. 
Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Save the file either as ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.4.6. Creating the image set configuration Before you can use the oc-mirror plugin to mirror image sets, you must create an image set configuration file. This image set configuration file defines which OpenShift Container Platform releases, Operators, and other images to mirror, along with other configuration settings for the oc-mirror plugin. You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a registry that supports Docker v2-2 . The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. 
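As an illustration of the local directory backend mentioned above (the procedure that follows uses the registry form), a minimal configuration might begin as in the following sketch; the metadata path and release channel are placeholders chosen for this example:
cat <<'EOF' > imageset-config.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: /home/user/metadata
mirror:
  platform:
    channels:
    - name: stable-4.11
EOF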
Prerequisites You have created a container image registry credentials file. For instructions, see Configuring credentials that allow images to be mirrored . Procedure Use the oc mirror init command to create a template for the image set configuration and save it to a file called imageset-config.yaml : USD oc mirror init --registry example.com/mirror/oc-mirror-metadata > imageset-config.yaml 1 1 Replace example.com/mirror/oc-mirror-metadata with the location of your registry for the storage backend. Edit the file and adjust the settings as necessary: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.11 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi8/ubi:latest 9 helm: {} 1 Add archiveSize to set the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel to retrieve the OpenShift Container Platform images from. 5 Add graph: true to build and push the graph-data image to the mirror registry. The graph-data image is required to create OpenShift Update Service (OSUS). The graph: true field also generates the UpdateService custom resource manifest. The oc command-line interface (CLI) can use the UpdateService custom resource manifest to create OSUS. For more information, see About the OpenShift Update Service . 6 Set the Operator catalog to retrieve the OpenShift Container Platform images from. 7 Specify only certain Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog. 8 Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 9 Specify any additional images to include in image set. See Image set configuration parameters for the full list of parameters and Image set configuration examples for various mirroring use cases. Save the updated file. This image set configuration file is required by the oc mirror command when mirroring content. Additional resources Image set configuration parameters Image set configuration examples Using the OpenShift Update Service in a disconnected environment 3.4.7. Mirroring an image set to a mirror registry You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a partially disconnected environment or in a fully disconnected environment . These procedures assume that you already have your mirror registry set up. 3.4.7.1. Mirroring an image set in a partially disconnected environment In a partially disconnected environment, you can mirror an image set directly to the target mirror registry. 3.4.7.1.1. Mirroring from mirror to mirror You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation. 
You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command to mirror the images from the specified image set configuration to a specified registry: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 2 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. Next steps Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . 3.4.7.2. Mirroring an image set in a fully disconnected environment To mirror an image set in a fully disconnected environment, you must first mirror the image set to disk , then mirror the image set file on disk to a mirror . 3.4.7.2.1. Mirroring from mirror to disk You can use the oc-mirror plugin to generate an image set and save the contents to disk. The generated image set can then be transferred to the disconnected environment and mirrored to the target registry. Important Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundreds of gigabytes of data to disk. The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plugin again, the generated image set is often smaller. You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command to mirror the images from the specified image set configuration to disk: USD oc mirror --config=./imageset-config.yaml \ 1 file://<path_to_output_directory> 2 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml .
2 Specify the target directory where you want to output the image set file. The target directory path must start with file:// . Verification Navigate to your output directory: USD cd <path_to_output_directory> Verify that an image set .tar file was created: USD ls Example output mirror_seq1_000000.tar Next steps Transfer the image set .tar file to the disconnected environment. Troubleshooting Unable to retrieve source image . 3.4.7.2.2. Mirroring from disk to mirror You can use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry. Prerequisites You have installed the OpenShift CLI ( oc ) in the disconnected environment. You have installed the oc-mirror CLI plugin in the disconnected environment. You have generated the image set file by using the oc mirror command. You have transferred the image set file to the disconnected environment. Procedure Run the oc mirror command to process the image set file on disk and mirror the contents to a target mirror registry: USD oc mirror --from=./mirror_seq1_000000.tar \ 1 docker://registry.example:5000 2 1 Pass in the image set .tar file to mirror, named mirror_seq1_000000.tar in this example. If an archiveSize value was specified in the image set configuration file, the image set might be broken up into multiple .tar files. In this situation, you can pass in a directory that contains the image set .tar files. 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. This command updates the mirror registry with the image set and generates the ImageContentSourcePolicy and CatalogSource resources. Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. Next steps Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . 3.4.8. Configuring your cluster to use the resources generated by oc-mirror After you have mirrored your image set to the mirror registry, you must apply the generated ImageContentSourcePolicy , CatalogSource , and release image signature resources to the cluster. The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry. The release image signatures are used to verify the mirrored release images. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role.
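For example, a login from the command line might look like the following sketch; the API server URL and user name are placeholders for your environment:
oc login https://api.cluster.example.com:6443 -u <cluster_admin_user>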
Apply the YAML files from the results directory to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1639608409/ If you mirrored release images, apply the release image signatures to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/ Note If you are mirroring Operators instead of clusters, you do not need to run USD oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/ . Running that command will return an error, as there are no release image signatures to apply. Verification Verify that the ImageContentSourcePolicy resources were successfully installed by running the following command: USD oc get imagecontentsourcepolicy --all-namespaces Verify that the CatalogSource resources were successfully installed by running the following command: USD oc get catalogsource --all-namespaces 3.4.9. Keeping your mirror registry content updated After your target mirror registry is populated with the initial image set, be sure to update it regularly so that it has the latest content. You can optionally set up a cron job so that the mirror registry is updated on a regular basis; a sample cron entry is sketched after the command reference at the end of this chapter. Ensure that you update your image set configuration to add or remove OpenShift Container Platform and Operator releases as necessary. Any images that are removed are pruned from the mirror registry. 3.4.9.1. About updating your mirror registry content When you run the oc-mirror plugin again, it generates an image set that only contains new and updated images since the previous execution. Because it only pulls in the differences since the previous image set was created, the generated image set is often smaller and faster to process than the initial image set. Important Generated image sets are sequential and must be pushed to the target mirror registry in order. You can derive the sequence number from the file name of the generated image set archive file. Adding new and updated images Depending on the settings in your image set configuration, future executions of oc-mirror can mirror additional new and updated images. Review the settings in your image set configuration to ensure that you are retrieving new versions as necessary. For example, you can set the minimum and maximum versions of Operators to mirror if you want to restrict to specific versions. Alternatively, you can set the minimum version as a starting point to mirror, but keep the version range open so you keep receiving new Operator versions on future executions of oc-mirror. Omitting any minimum or maximum version gives you the full version history of an Operator in a channel. Omitting explicitly named channels gives you all releases in all channels of the specified Operator. Omitting any named Operator gives you the entire catalog of all Operators and all their versions ever released. All these constraints and conditions are evaluated against the publicly released content by Red Hat on every invocation of oc-mirror. This way, it automatically picks up new releases and entirely new Operators. Constraints can be specified by only listing a desired set of Operators, which will not automatically add other newly released Operators into the mirror set. You can also specify a particular release channel, which limits mirroring to just this channel and not any new channels that have been added. This is important for Operator products, such as Red Hat Quay, that use different release channels for their minor releases.
Lastly, you can specify a maximum version of a particular Operator, which causes the tool to mirror only the specified version range so that you do not automatically receive any releases newer than the mirrored maximum version. In all these cases, you must update the image set configuration file to broaden the scope of Operator mirroring if you want other Operators, new channels, and newer versions of Operators to be available in your target registry. It is recommended to align constraints like channel specification or version ranges with the release strategy that a particular Operator has chosen. For example, when the Operator uses a stable channel, you should restrict mirroring to that channel and potentially a minimum version to find the right balance between download volume and getting stable updates regularly. If the Operator chooses a release version channel scheme, for example stable-3.7 , you should mirror all releases in that channel. This allows you to keep receiving patch versions of the Operator, for example 3.7.1 . You can also regularly adjust the image set configuration to add channels for new product releases, for example stable-3.8 . Pruning images Images are pruned automatically from the target mirror registry if they are no longer included in the latest image set that was generated and mirrored. This allows you to easily manage and clean up unneeded content and reclaim storage resources. If there are OpenShift Container Platform releases or Operator versions that you no longer need, you can modify your image set configuration to exclude them, and they will be pruned from the mirror registry upon mirroring. This can be done by adjusting a minimum or maximum version range setting per Operator in the image set configuration file or by deleting the Operator from the list of Operators to mirror from the catalog. You can also remove entire Operator catalogs or entire OpenShift Container Platform releases from the configuration file. Important If there are no new or updated images to mirror, the excluded images are not pruned from the target mirror registry. Additionally, if an Operator publisher removes an Operator version from a channel, the removed versions are pruned from the target mirror registry. 3.4.9.2. Updating your mirror registry content After you publish the initial image set to the mirror registry, you can use the oc-mirror plugin to keep your disconnected clusters updated. Depending on your image set configuration, oc-mirror automatically detects newer releases of OpenShift Container Platform and your selected Operators that have been released after you completed the initial mirror. It is recommended to run oc-mirror at regular intervals, for example in a nightly cron job, to receive product and security updates on a timely basis. Prerequisites You have used the oc-mirror plugin to mirror the initial image set to your mirror registry. You have access to the storage backend that was used for the initial execution of the oc-mirror plugin. Note You must use the same storage backend as the initial execution of oc-mirror for the same mirror registry. Do not delete or modify the metadata image that is generated by the oc-mirror plugin. Procedure If necessary, update your image set configuration file to pick up new OpenShift Container Platform and Operator versions. See Image set configuration examples for example mirroring use cases. Follow the same steps that you used to mirror your initial image set to the mirror registry.
For instructions, see Mirroring an image set in a partially disconnected environment or Mirroring an image set in a fully disconnected environment . Important You must provide the same storage backend so that only a differential image set is created and mirrored. If you specified a top-level namespace for the mirror registry during the initial image set creation, then you must use this same namespace every time you run the oc-mirror plugin for the same mirror registry. Configure your cluster to use the resources generated by oc-mirror. Additional resources Image set configuration examples Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment Configuring your cluster to use the resources generated by oc-mirror 3.4.10. Performing a dry run You can use oc-mirror to perform a dry run, without actually mirroring any images. This allows you to review the list of images that would be mirrored, as well as any images that would be pruned from the mirror registry. It also allows you to catch any errors with your image set configuration early or use the generated list of images with other tools to carry out the mirroring operation. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command with the --dry-run flag to perform a dry run: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 \ 2 --dry-run 3 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the mirror registry. Nothing is mirrored to this registry as long as you use the --dry-run flag. 3 Use the --dry-run flag to generate the dry run artifacts and not an actual image set file. Example output Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index ... info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt Navigate into the workspace directory that was generated: USD cd oc-mirror-workspace/ Review the mapping.txt file that was generated. This file contains a list of all images that would be mirrored. Review the pruning-plan.json file that was generated. This file contains a list of all images that would be pruned from the mirror registry when the image set is published. Note The pruning-plan.json file is only generated if your oc-mirror command points to your mirror registry and there are images to be pruned. 3.4.11. Image set configuration parameters The oc-mirror plugin requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource. Table 3.1. ImageSetConfiguration parameters Parameter Description Values apiVersion The API version for the ImageSetConfiguration content. String. For example: mirror.openshift.io/v1alpha2 . archiveSize The maximum size, in GiB, of each archive file within the image set. Integer. 
For example: 4 mirror The configuration of the image set. Object mirror.additionalImages The additional images configuration of the image set. Array of objects. For example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest mirror.additionalImages.name The tag or digest of the image to mirror. String. For example: registry.redhat.io/ubi8/ubi:latest mirror.blockedImages The full tag, digest, or pattern of images to block from mirroring. Array of strings. For example: docker.io/library/alpine mirror.helm The helm configuration of the image set. Note that the oc-mirror plugin supports only helm charts that do not require user input when rendered. Object mirror.helm.local The local helm charts to mirror. Array of objects. For example: local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz mirror.helm.local.name The name of the local helm chart to mirror. String. For example: podinfo . mirror.helm.local.path The path of the local helm chart to mirror. String. For example: /test/podinfo-5.0.0.tar.gz . mirror.helm.repositories The remote helm repositories to mirror from. Array of objects. For example: repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0 mirror.helm.repositories.name The name of the helm repository to mirror from. String. For example: podinfo . mirror.helm.repositories.url The URL of the helm repository to mirror from. String. For example: https://example.github.io/podinfo . mirror.helm.repositories.charts The remote helm charts to mirror. Array of objects. mirror.helm.repositories.charts.name The name of the helm chart to mirror. String. For example: podinfo . mirror.helm.repositories.charts.version The version of the named helm chart to mirror. String. For example: 5.0.0 . mirror.operators The Operators configuration of the image set. Array of objects. For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: elasticsearch-operator minVersion: '2.4.0' mirror.operators.catalog The Operator catalog to include in the image set. String. For example: registry.redhat.io/redhat/redhat-operator-index:v4.11 . mirror.operators.full When true , downloads the full catalog, Operator package, or Operator channel. Boolean. The default value is false . mirror.operators.packages The Operator packages configuration. Array of objects. For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: elasticsearch-operator minVersion: '5.2.3-31' mirror.operators.packages.name The Operator package name to include in the image set String. For example: elasticsearch-operator . mirror.operators.packages.channels The Operator package channel configuration. Object mirror.operators.packages.channels.name The Operator channel name, unique within a package, to include in the image set. String. For example: fast or stable-v4.11 . mirror.operators.packages.channels.maxVersion The highest version of the Operator mirror across all channels in which it exists. String. For example: 5.2.3-31 mirror.operators.packages.channels.minBundle The name of the minimum bundle to include, plus all bundles in the upgrade graph to the channel head. Set this field only if the named bundle has no semantic version metadata. String. For example: bundleName mirror.operators.packages.channels.minVersion The lowest version of the Operator to mirror across all channels in which it exists. String. 
For example: 5.2.3-31 mirror.operators.packages.maxVersion The highest version of the Operator to mirror across all channels in which it exists. String. For example: 5.2.3-31 . mirror.operators.packages.minVersion The lowest version of the Operator to mirror across all channels in which it exists. String. For example: 5.2.3-31 . mirror.operators.skipDependencies If true , dependencies of bundles are not included. Boolean. The default value is false . mirror.operators.targetName Optional alternative name to mirror the referenced catalog as. String. For example: my-operator-catalog mirror.operators.targetTag Optional alternative tag to append to the targetName . String. For example: v1 mirror.platform The platform configuration of the image set. Object mirror.platform.architectures The architecture of the platform release payload to mirror. Array of strings. For example: architectures: - amd64 - arm64 mirror.platform.channels The platform channel configuration of the image set. Array of objects. For example: channels: - name: stable-4.10 - name: stable-4.11 mirror.platform.channels.full When true , sets the minVersion to the first release in the channel and the maxVersion to the last release in the channel. Boolean. The default value is false . mirror.platform.channels.name The name of the release channel. String. For example: stable-4.11 mirror.platform.channels.minVersion The minimum version of the referenced platform to be mirrored. String. For example: 4.9.6 mirror.platform.channels.maxVersion The highest version of the referenced platform to be mirrored. String. For example: 4.11.1 mirror.platform.channels.shortestPath Toggles shortest path mirroring or full range mirroring. Boolean. The default value is false . mirror.platform.channels.type The type of the platform to be mirrored. String. For example: ocp or okd . The default is ocp . mirror.platform.graph Indicates whether the OSUS graph is added to the image set and subsequently published to the mirror. Boolean. The default value is false . storageConfig The back-end configuration of the image set. Object storageConfig.local The local back-end configuration of the image set. Object storageConfig.local.path The path of the directory to contain the image set metadata. String. For example: ./path/to/dir/ . storageConfig.registry The registry back-end configuration of the image set. Object storageConfig.registry.imageURL The back-end registry URI. Can optionally include a namespace reference in the URI. String. For example: quay.io/myuser/imageset:metadata . storageConfig.registry.skipTLS Optionally skip TLS verification of the referenced back-end registry. Boolean. The default value is false . 3.4.12. Image set configuration examples The following ImageSetConfiguration file examples show the configuration for various mirroring use cases. Use case: Including arbitrary images and helm charts The following ImageSetConfiguration file uses a registry storage backend and includes helm charts and an additional Red Hat Universal Base Image (UBI). 
Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - "s390x" channels: - name: stable-4.11 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi8/ubi:latest Use case: Including Operator versions from a minimum to the latest The following ImageSetConfiguration file uses a local storage backend and includes only the Red Hat Advanced Cluster Security for Kubernetes Operator, versions starting at 3.68.0 and later in the latest channel. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: rhacs-operator channels: - name: latest minVersion: 3.68.0 Use case: Including the shortest OpenShift Container Platform upgrade path The following ImageSetConfiguration file uses a local storage backend and includes all OpenShift Container Platform versions along the shortest upgrade path from the minimum version of 4.9.37 to the maximum version of 4.10.22 . Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.10 minVersion: 4.9.37 maxVersion: 4.10.22 shortestPath: true Use case: Including all versions of OpenShift Container Platform from a minimum to the latest The following ImageSetConfiguration file uses a registry storage backend and includes all OpenShift Container Platform versions starting at a minimum version of 4.10.10 to the latest version in the channel. On every invocation of oc-mirror with this image set configuration, the latest release of the stable-4.10 channel is evaluated, so running oc-mirror at regular intervals ensures that you automatically receive the latest releases of OpenShift Container Platform images. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: channels: - name: stable-4.10 minVersion: 4.10.10 Use case: Including Operator versions from a minimum to a maximum The following ImageSetConfiguration file uses a local storage backend and includes only an example Operator, versions starting at 1.0.0 through 2.0.0 in the stable channel. This allows you to only mirror a specific version range of a particular Operator. As time progresses, you can use these settings to adjust the version to newer releases, for example when you no longer have version 1.0.0 running anywhere anymore. In this scenario, you can increase the minVersion to something newer, for example 1.5.0 . When oc-mirror runs again with the updated version range, it automatically detects that any releases older than 1.5.0 are no longer required and deletes those from the registry to conserve storage space. 
Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: example-operator channels: - name: stable minVersion: '1.0.0' maxVersion: '2.0.0' 3.4.13. Command reference for oc-mirror The following tables describe the oc mirror subcommands and flags: Table 3.2. oc mirror subcommands Subcommand Description completion Generate the autocompletion script for the specified shell. describe Output the contents of an image set. help Show help about any subcommand. init Output an initial image set configuration template. list List available platform and Operator content and their version. version Output the oc-mirror version. Table 3.3. oc mirror flags Flag Description -c , --config <string> Specify the path to an image set configuration file. --continue-on-error If any non image-pull related error occurs, continue and attempt to mirror as much as possible. --dest-skip-tls Disable TLS validation for the target registry. --dest-use-http Use plain HTTP for the target registry. --dry-run Print actions without mirroring images. Generates mapping.txt and pruning-plan.json files. --from <string> Specify the path to an image set archive that was generated by an execution of oc-mirror to load into a target registry. -h , --help Show the help. --ignore-history Ignore past mirrors when downloading images and packing layers. Disables incremental mirroring and might download more data. --manifests-only Generate manifests for ImageContentSourcePolicy objects to configure a cluster to use the mirror registry, but do not actually mirror any images. To use this flag, you must pass in an image set archive with the --from flag. --max-per-registry <int> Specify the number of concurrent requests allowed per registry. The default is 6 . --skip-cleanup Skip removal of artifact directories. --skip-image-pin Do not replace image tags with digest pins in Operator catalogs. --skip-metadata-check Skip metadata when publishing an image set. This is only recommended when the image set was created with --ignore-history . --skip-missing If an image is not found, skip it instead of reporting an error and aborting execution. Does not apply to custom images explicitly specified in the image set configuration. --skip-verification Skip digest verification. --source-skip-tls Disable TLS validation for the source registry. --source-use-http Use plain HTTP for the source registry. -v , --verbose <int> Specify the number for the log level verbosity. Valid values are 0 - 9 . The default is 0 . 3.4.14. Additional resources About cluster updates in a disconnected environment
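As noted in "Keeping your mirror registry content updated", running oc-mirror at regular intervals keeps the mirror current. A sample nightly cron entry is sketched below; the schedule, file paths, and registry host are placeholders and assume that the oc-mirror plugin and the image set configuration file are already in place on the host with internet access:
0 2 * * * /usr/local/bin/oc-mirror --config=/home/user/imageset-config.yaml docker://mirror.example.com:8443 >> /var/log/oc-mirror.log 2>&1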
[ "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade -v", "sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --pgStorage <example_directory_name>/pg-data --quayStorage <example_directory_name>/quay-storage -v", "./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key", "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "export QUAY=/USDHOME/quay-install", "cp ~/ssl.crt USDQUAY/quay-config", "cp ~/ssl.key USDQUAY/quay-config", "systemctl restart quay-app", "./mirror-registry uninstall -v --quayRoot <example_directory_name>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 
--to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> \\ --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "podman login registry.redhat.io", "REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json", "podman login <mirror_registry>", "oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6", "src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2", "oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2", "podman login <mirror_registry>", "oc adm catalog mirror file://local/index/<repository>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>[/<repository>] --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]", "manifests-<index_image_name>-<random_number>", "manifests-index/<repository>/<index_image_name>-<random_number>", "tar xvzf oc-mirror.tar.gz", "chmod +x oc-mirror", "sudo mv oc-mirror /usr/local/bin/.", "oc mirror help", "cat ./pull-secret | jq . 
> <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "oc mirror init --registry example.com/mirror/oc-mirror-metadata > imageset-config.yaml 1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.11 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi8/ubi:latest 9 helm: {}", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2", "oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2", "cd <path_to_output_directory>", "ls", "mirror_seq1_000000.tar", "oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2", "oc apply -f ./oc-mirror-workspace/results-1639608409/", "oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/", "oc get imagecontentsourcepolicy --all-namespaces", "oc get catalogsource --all-namespaces", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3", "Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt", "cd oc-mirror-workspace/", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz", "repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "architectures: - amd64 - arm64", "channels: - name: stable-4.10 - name: stable-4.11", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: 
imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.11 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: rhacs-operator channels: - name: latest minVersion: 3.68.0", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.10 minVersion: 4.9.37 maxVersion: 4.10.22 shortestPath: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: channels: - name: stable-4.10 minVersion: 4.10.10", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11 packages: - name: example-operator channels: - name: stable minVersion: '1.0.0' maxVersion: '2.0.0'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/disconnected-installation-mirroring
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 1.0-23 Mon March 10 2017 Jiri Herrmann Updates for the 6.9 GA release Revision 1.0-19 Mon May 02 2016 Jiri Herrmann Updates for the 6.8 GA release Revision 1.0-18 Tue Mar 01 2016 Jiri Herrmann Multiple updates for the 6.8 beta publication Revision 1.0-17 Thu Oct 08 2015 Jiri Herrmann Cleaned up the Revision History Revision 1.0-16 Wed July 15 2015 Dayle Parker Version for 6.7 GA release.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/appe-virtualization_tuning_optimization_guide-revision_history
Chapter 10. Backing up storage data from Google Persistent Disk
Chapter 10. Backing up storage data from Google Persistent Disk Red Hat recommends that you back up the data on your persistent volume claims (PVCs) regularly. Backing up your data is particularly important before deleting a user and before uninstalling OpenShift AI, as all PVCs are deleted when OpenShift AI is uninstalled. Prerequisites You have credentials for Red Hat OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). You have administrator access to the OpenShift Dedicated cluster. You have credentials for the Google Cloud Platform (GCP) account that the OpenShift Dedicated cluster is deployed under. Procedure Determine the IDs of the persistent volumes (PVs) that you want to back up. In the OpenShift Dedicated web console, change into the Administrator perspective. Click Home Projects . Click the rhods-notebooks project. The Details page for the project opens. Click the PersistentVolumeClaims in the Inventory section. The PersistentVolumeClaims page opens. Note the ID of the persistent volume (PV) that you want to back up. The persistent volume (PV) IDs are required to identify the correct persistent disk to back up in your GCP instance. Locate the persistent disk containing the PVs that you want to back up. Log in to the Google Cloud console ( https://console.cloud.google.com ) and ensure that you are viewing the region that your OpenShift Dedicated cluster is deployed in. Click the navigation menu (≡) and then click Compute Engine . From the side navigation, under Storage , click Disks . The Disks page opens. In the Filter query box, enter the ID of the persistent volume (PV) that you made a note of earlier. The Disks page reloads to display the search results. Click on the disk shown and verify that any kubernetes.io/created-for/pvc/namespace tags contain the value rhods-notebooks , and any kubernetes.io/created-for/pvc/name tags match the name of the persistent volume that the persistent disk is being used for, for example, jupyterhub-nb-user1-pvc . Back up the persistent disk that contains your persistent volume (PV). Select CREATE SNAPSHOT from the top navigation. The Create a snapshot page opens. Enter a unique Name for the snapshot. Under Source disk , verify the persistent disk you want to back up is displayed. Change any optional settings as needed. Click CREATE . The snapshot of the persistent disk is created. Verification The snapshot that you created is visible on the Snapshots page in GCP. Additional resources Google Cloud documentation: Create and manage disk snapshots
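If you prefer the command line, the PVC-to-PV bindings in the rhods-notebooks project can also be listed with the OpenShift CLI. This is an illustrative sketch only: the field that records the underlying Google persistent disk name depends on the storage driver in use (in-tree GCE PD or the GCP CSI driver), so check the persistent volume output for whichever field is present.
# List the PVCs and the persistent volumes (PVs) they are bound to
oc get pvc -n rhods-notebooks
# Inspect one PV for the underlying disk name (pdName or volumeHandle)
oc get pv <pv_name> -o yaml | grep -iE 'pdName|volumeHandle'
Replace <pv_name> with the value shown in the VOLUME column of the first command.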
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_the_openshift_ai_cloud_service/backing-up-storage-data-from-google-persistent-disk_install
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_cryostat_to_manage_a_jfr_recording/making-open-source-more-inclusive
File System Guide
File System Guide Red Hat Ceph Storage 5 Configuring and Mounting Ceph File Systems Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/index
Chapter 20. Quotas and Service Level Agreement Policy
Chapter 20. Quotas and Service Level Agreement Policy 20.1. Introduction to Quota Quota is a resource limitation tool provided with Red Hat Virtualization. Quota may be thought of as a layer of limitations on top of the layer of limitations set by User Permissions. Quota is a data center object. Quota allows administrators of Red Hat Virtualization environments to limit user access to memory, CPU, and storage. Quota defines the memory resources and storage resources an administrator can assign users. As a result users may draw on only the resources assigned to them. When the quota resources are exhausted, Red Hat Virtualization does not permit further user actions. There are two different kinds of Quota: Table 20.1. The Two Different Kinds of Quota Quota type Definition Run-time Quota This quota limits the consumption of runtime resources, like CPU and memory. Storage Quota This quota limits the amount of storage available. Quota, like SELinux, has three modes: Table 20.2. Quota Modes Quota Mode Function Enforced This mode puts into effect the quota that you have set in Audit mode, limiting resources to the group or user affected by the quota. Audit This mode logs quota violations without blocking users and can be used to test quotas. In Audit mode, you can increase or decrease the amount of runtime quota and the amount of storage quota available to users affected by it. Disabled This mode turns off the runtime and storage limitations defined by the quota. When a user attempts to run a virtual machine, the specifications of the virtual machine are compared to the storage allowance and the runtime allowance set in the applicable quota. If starting a virtual machine causes the aggregated resources of all running virtual machines covered by a quota to exceed the allowance defined in the quota, then the Manager refuses to run the virtual machine. When a user creates a new disk, the requested disk size is added to the aggregated disk usage of all the other disks covered by the applicable quota. If the new disk takes the total aggregated disk usage above the amount allowed by the quota, disk creation fails. Quota allows for resource sharing of the same hardware. It supports hard and soft thresholds. Administrators can use a quota to set thresholds on resources. These thresholds appear, from the user's point of view, as 100% usage of that resource. To prevent failures when the customer unexpectedly exceeds this threshold, the interface supports a "grace" amount by which the threshold can be briefly exceeded. Exceeding the threshold results in a warning sent to the customer. Important Quota imposes limitations upon the running of virtual machines. Ignoring these limitations is likely to result in a situation in which you cannot use your virtual machines and virtual disks. When quota is running in enforced mode, virtual machines and disks that do not have quotas assigned cannot be used. To power on a virtual machine, a quota must be assigned to that virtual machine. To create a snapshot of a virtual machine, the disk associated with the virtual machine must have a quota assigned. When creating a template from a virtual machine, you are prompted to select the quota that you want the template to consume. This allows you to set the template (and all future machines created from the template) to consume a different quota than the virtual machine and disk from which the template is generated.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-quotas_and_service_level_agreement_policy
Chapter 9. Advanced Configuration
Chapter 9. Advanced Configuration This chapter describes advanced resource types and advanced configuration features that Pacemaker supports. 9.1. Resource Clones You can clone a resource so that the resource can be active on multiple nodes. For example, you can use cloned resources to configure multiple instances of an IP resource to distribute throughout a cluster for node balancing. You can clone any resource provided the resource agent supports it. A clone consists of one resource or one resource group. Note Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time. 9.1.1. Creating and Removing a Cloned Resource You can create a resource and a clone of that resource at the same time with the following command. The name of the clone will be resource_id -clone . You cannot create a resource group and a clone of that resource group in a single command. Alternately, you can create a clone of a previously-created resource or resource group with the following command. The name of the clone will be resource_id -clone or group_name -clone . Note You need to configure resource configuration changes on one node only. Note When configuring constraints, always use the name of the group or clone. When you create a clone of a resource, the clone takes on the name of the resource with -clone appended to the name. The following commands creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone . Note When you create a resource or resource group clone that will be ordered after another clone, you should almost always set the interleave=true option. This ensures that copies of the dependent clone can stop or start when the clone it depends on has stopped or started on the same node. If you do not set this option, if a cloned resource B depends on a cloned resource A and a node leaves the cluster, when the node returns to the cluster and resource A starts on that node, then all of the copies of resource B on all of the nodes will restart. This is because when a dependent cloned resource does not have the interleave option set, all instances of that resource depend on any running instance of the resource it depends on. Use the following command to remove a clone of a resource or a resource group. This does not remove the resource or resource group itself. For information on resource options, see Section 6.1, "Resource Creation" . Table 9.1, "Resource Clone Options" describes the options you can specify for a cloned resource. Table 9.1. Resource Clone Options Field Description priority, target-role, is-managed Options inherited from resource that is being cloned, as described in Table 6.3, "Resource Meta Options" . clone-max How many copies of the resource to start. Defaults to the number of nodes in the cluster. clone-node-max How many copies of the resource can be started on a single node; the default value is 1 . notify When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false , true . The default value is false . globally-unique Does each copy of the clone perform a different function? 
Allowed values: false , true If the value of this option is false , these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine. If the value of this option is true , a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false . ordered Should the copies be started in series (instead of in parallel). Allowed values: false , true . The default value is false . interleave Changes the behavior of ordering constraints (between clones/masters) so that copies of the first clone can start or stop as soon as the copy on the same node of the second clone has started or stopped (rather than waiting until every instance of the second clone has started or stopped). Allowed values: false , true . The default value is false . clone-min If a value is specified, any clones which are ordered after this clone will not be able to start until the specified number of instances of the original clone are running, even if the interleave option is set to true . 9.1.2. Clone Constraints In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently to those for regular resources except that the clone's id must be used. The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1 . Ordering constraints behave slightly differently for clones. In the example below, because the interleave clone option is left to default as false , no instance of webfarm-stats will start until all instances of webfarm-clone that need to be started have done so. Only if no copies of webfarm-clone can be started then webfarm-stats will be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself. Colocation of a regular (or group) resource with a clone means that the resource can run on any machine with an active copy of the clone. The cluster will choose a copy based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. In such cases, the set of allowed locations for the clone is limited to nodes on which the clone is (or will be) active. Allocation is then performed as normally. The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone . 9.1.3. Clone Stickiness To achieve a stable allocation pattern, clones are slightly sticky by default. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster.
[ "pcs resource create resource_id standard:provider:type | type [ resource options ] clone [meta clone_options ]", "pcs resource clone resource_id | group_name [ clone_options ]", "pcs resource create webfarm apache clone", "pcs resource unclone resource_id | group_name", "pcs constraint location webfarm-clone prefers node1", "pcs constraint order start webfarm-clone then webfarm-stats", "pcs constraint colocation add webfarm-stats with webfarm-clone" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-advancedresource-HAAR
Chapter 5. Documentation changes
Chapter 5. Documentation changes This section details the major documentation updates delivered with Red Hat OpenStack Platform (RHOSP) 17.0, and the changes made to the documentation set that include adding new features, enhancements, and corrections. The section also details the addition of new titles and the removal of retired or replaced titles. Table 5.1. Table legend Column Meaning Date The date that the documentation change was published. 17.0 versions impacted The RHOSP 17.0 versions that the documentation change impacts. Unless stated otherwise, a change that impacts a particular version also impacts all later versions. Components The RHOSP components that the documenation change impacts. Affected content The RHOSP documents that contain the change or update. Description of change A brief summary of the change to the document. Table 5.2. Document changes Date 17.0 versions impacted Components Affected content Description of change 20 October 2023 17.0 Networking Exporting the DNS service pool configuration Updated the procedure to describe how to run the command inside a container. 04 October 2023 17.0 Networking Enabling custom composable networks Removing an overcloud stack Replaced networks definition file, network_data.yaml , with network_data_v2.yaml . 29 September 2023 17.0 Security Federal Information Processing Standard on Red Hat OpenStack Platform Corrected procedure so that FIPS images are uploaded to glance 11 September 2023 17.1 Networking Chapter 20. Replacing Controller nodes Changes made to Chapter 20 to address the OVN database partition issue described in BZ 2222543 07 September 2023 17.1 Networking QoS rules To Table 9.1, added footnote (#8) stating that RHOSP does not support QoS for trunk ports. 30 August 2023 17.1 Networking Overview of allowed address pairs Added a definition for a virtual port (vport). 30 August 2023 17.0 Security Creating images Removed deprecated example for building images and replaced with link to image builder documentation 10 August 2023 17.0 Security link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/assembly_application-credentials#proc_replacing-application-credentials_application-credentials Procedure on replacing applications credentials in undercloud.conf is rewritten to specify need for user credentials, provides more details. 07 August 2023 17.0 Security link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/ Chapter 'Rotating service account passwords' uses a deprecated mistral workflow for execution, and has been removed. 07 August 2023 17.0 Security Replacing Application Credentials . Procedure requires use of --unrestricted flag which is not recommended; procedure is removed. 20 July 2023 17.0 All-in-One Deploying the all-in-one Red Hat OpenStack Platform environment Procedure is updated with corrected path to clouds.yaml . 
12 July 2023 17.0 Security Implementing TLS-e with Ansible This procedure is updated with an optional step that is needed when the IdM domain and IDM realm do not match 27 June 2023 17.0 Edge link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/precaching-glance-images-into-nova#proc_ansible-image-cache-playbook_precaching-glance-images The procedure is updated to remove the deprecated way of producing an ansible inventory in Red Hat OpenStack Platform 21 June 2023 17.0 Networking Creating Linux bonds The example has changed for the "Linux bond set to 802.3ad LACP mode with one VLAN." 20 June 2023 17.0 Networking Adding a new leaf to a spine-leaf deployment Example for flat network mappings (step 7) updated. 13 June 2023 17.0 Networking Load-balancing service (octavia) feature support matrix Items added that specify no support for SR-IOV and DPDK. 25 May 2023 17.0 Networking Adding a composable network Removed what was previously labeled step 9, assigning predictable virtual IPs for Redis and OVNDBs. 23 May 2023 17.0 Networking Configuring existing BIND servers for the DNS service Labeled the feature integrating the RHOSP DNS service with an existing BIND infrastructure as technology preview. 11 May 2023 17.0 Networking Enabling VLAN transparency in ML2/OVN deployments Added a new step that instructs users to set --allowed-address on the VM port. 4 May 2023 17.0 Validation Framework Starting the undercloud in the High Availability for Compute Instances guide Starting the undercloud in the Director Installation and Usage guide Step 4 has been changed. The option --group pre-introspection has been added. 26 April 2023 17.0 Validation framework Running validation using the validation framework Updated Ansible inventory location and Ansible commands. 19 April 2023 17.0 Networking Networking guide Introduction to the OpenStack Dashboard Using Designate for DNS-as-a-Service Added "/puppet-generated" to various configuration file paths. 18 April 2023 17.0 Compute Configuring NVDIMM Compute nodes to provide persistent memory for instances The "Configuring NVDIMM Compute nodes to provide persistent memory for instances" content has been removed from the Configuring the Compute Service for Instance Creation guide. Red Hat has removed support for persistent memory from RHOSP 17.0 and future releases in response to the announcement by the Intel Corporation on July 28, 2022 that they are discontinuing investment in their Intel(R) OptaneTM business: Intel(R) OptaneTM Business Update: What Does This Mean for Warranty and Support Intel(R) Product Change Notification #119311-00 12 April 2023 17.0 Compute Migration constraints Updated the configuration to minimize packet loss when live migrating instances in an ML2/OVS deployment. 10 April 2023 17.0 Security Removing services from the overcloud firewall Removed invalid parameter/value pair "action: accept" from firefwall.yaml in example provided in step 2. 05 April 2023 17.0 Storage Chapter 6. Configuring the Shared File Systems service (manila) Chapter 7. Performing operations with the Shared File Systems service (manila) The Shared File Systems service (manila) content in the Storage Guide has been reorganized into two separate chapters for configuration and operations. 23 Mar 2023 17.0 Networking Configuring VLAN provider networks Step 1 under "Verification steps" has been changed. The --external and --share options have been removed. 
23 Mar 2023 17.0 Security and Hardening Adding services to the overcloud firewall The example of ~/templates/firewall.yaml is updated. 23 Mar 2023 17.0 Networking Fixing OVN controllers that fail to register on edge sites A step has been added to the resolution. 20 Mar 2023 17.0 Security Replacing the IdM server for Red Hat OpenStack Platfrom with a replica A new procedure is added to ensure that critical parameters are validated to avoid future deployment failures. 16 Mar 2023 17.0 Storage Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploying the Shared File Systems service with CephFS through NFS has been removed from the Customer Portal and the content has been moved to Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . 07 Mar 2023 17.0 NFV Network Functions Virtualization Planning and Configuration Guide Code snippets that contain the devname parameter have been replaced with the address parameter. 07 Mar 2023 17.0 NFV Configuring trust between virtual and physical functions The step that instructs users to "Modify permissions to allow users the capability of creating and updating port bindings" (step 3) has been removed. 06 Mar 2023 17.0 Networking Chapter 13. Configuring distributed virtual routing (DVR) The topic "Deploying DVR with ML2 OVS" has been removed from the Networking Guide . 02 Mar 2023 17.0 Storage Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploying the Shared File Systems service with native CephFS has been removed from the Customer Portal and the content has been moved to Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . 01 Mar 2023 17.0 Compute and NFV Enabling RT-KVM for NFV Workloads Chapter 15, "Configuring real-time compute," has been moved from the Configuring the Compute Service for Instance Creation guide to the Network Functions Virtualization Planning and Configuration Guide . 28 Feb 2023 17.0 Hardware Provisioning Configuration considerations for overcloud storage nodes Provisioning bare metal nodes for the overcloud Added guidance on configuring local disk partition sizes that meet the storage and retention requirements for your storage nodes. Added an optional step, step 6, to the "Provisioning bare metal nodes for the overcloud" procedure, to configure the disk partition size allocations if the default disk partition sizes do not meet your requirements. 27 Feb 2023 17.0 Networking Chapter 4. Configuring the overcloud Rewrote this chapter to capture deployment changes introduced in RHOSP 17.0. 27 Feb 2023 17.0 Edge Updating the central location Using external Ceph keys Using a pre-installed Red Hat Ceph Storage cluster at the edge Fixed the environment file name used within the procedures. 27 Feb 2023 17.0 Hardware Provisioning Removing failed bare-metal nodes from the node definition file Added a new procedure on how to remove a failed bare-metal node if the node provisioning fails because of a node hardware or network configuration failure. 23 Feb 2023 17.0 Networking, Compute Enabling VLAN transparency in ML2/OVN deployments Previously, this procedure instructed you to set the MTU on the network. The updated procedure correctly instructs you to set the MTU on the VLAN interface of each participating VM. 22 Feb 2023 17.0 Networking Chapter 17. Configuring allowed address pairs There were several instances of arguments that used underscores (_) instead of hypens (-). 
09 Feb 2023 17.0 CloudOps, Storage Storage Guide Deployment Recommendations for Specific Red Hat OpenStack Platform Services has been removed from the Customer Portal. For information about recommendations for the Object Storage service (swift), see Configuring the Object Storage service (swift) in the Storage Guide . 08 Feb 2023 17.0 NFV NFV BIOS settings Added a note about enabling SR-IOV global and NIC settings in the BIOS. 02 Feb 2023 17.0.1 Compute Creating an image for UEFI Secure Boot Image configuration parameters Flavor metadata Added content for the UEFI Secure Boot feature: New procedure for creating an image for UEFI Secure Boot. Image properties required for a UEFI Secure Boot image: os_secure_boot , hw_firmware_type , and hw_machine_type . Flavor metadata property to enable Secure Boot for instances launched with this flavor: os:secure_boot . 31 Jan 2023 17.0 Networking Configuring Network service availability zones with ML2/OVN Included new RHOSP heat parameter, OVNAvailabilityZone . 31 Jan 2023 17.0 NFV Supported Configurations for NFV Deployments and Chapter 7. Planning your OVS-DPDK deployment Note added stating that a Support Exception from Red Hat Support is needed to use OVS-DPDK on non-NFV workloads. 27 Jan 2023 17.0 Updates Chapter 1. Preparing for a minor update Removed topic about EUS repositories. 25 Jan 2023 17.0 Storage Volume allocation on multiple back ends The former Block Storage topic: Specifying back ends for volume creation, has been replaced with: Volume allocation on multiple back ends. 25 Jan 2023 17.0 Updates Chapter 3. Updating the overcloud Removed bullet point from list in Section 3.3 due to software fix. 23 Jan 2023 17.0 Networking Chapter 20. Using availability zones to make network resources highly available Several changes made to identify the distributed compute node (DCN) use case. 17 Jan 2023 17.0 Networking Chapter 2. Working with ML2/OVN Two topics have been added to Chapter 2: "Deploying a custom role with ML2/OVN" and "SR-IOV with ML2/OVN and native OVN DHCP." 17 Jan 2023 17.0 Edge Deploy the edge without storage Removed redundant step from procedure to deploy storage at the edge 16 Jan 2023 17.0 Networking Configuring a DHCP relay Added an important admonition about requiring option 79 for some DHCP relays. 13 Jan 2023 17.0 Edge Deploying storage at the edge Replaced instances of deprecated file dcn-hci.yaml with dcn-storage.yaml 13 Jan 2023 17.0 Edge Deploying storage at the edge Including the necessary deployed_ceph.yaml and central_ceph_external.yaml in example deploy command. 13 Jan 2023 17.0 Edge Using a pre-installed Red Hat Ceph Storage cluster at the edge Changed output directory of openstack overcloud export ceph to for consistency across guide 11 Jan 2023 17.0 Edge Installing the central location Fixed ceph deployment to include --stack central parameter and value 22 Dec 2023 17.0 Network Functions Virtualization Network Functions Virtualization Product Guide Removed guide because RHOSP 17.0 does not support Network Functions Virtualization (NFV). 22 Dec 2023 17.0 Network Functions Virtualization Network Functions Virtualization Planning and Configuration Guide Removed guide because RHOSP 17.0 does not support NFV. 
22 Dec 2022 17.0 Security Hardening infrastructure and virtualization Added procedures for investigating and modifying containers 22 Dec 2022 17.0 Security Increasing the size of private keys Added procedure for increasing the default size of private keys 21 Dec 2022 17.0 Compute Flavor metadata Added a note about considering the underlying host OS when you use the quota:cpu_* extra specs to tune the instance CPU resource use limits. 20 Dec 2022 17.0 Networking Configuring Network service availability zones with ML2/OVN Changes have been made to steps 4 and 5. 20 Dec 2022 17.0 Edge Deploying edge nodes without storage Step two is updated, you are not required to generate the DistributedComputeScaleOut role 09 Dec 2022 17.0 Networking Configuring the Networking service for QoS policies The step (7.ii.) about resource provider hypervisors has changed. 08 Dec 2022 17.0 Networking OVN metadata agent on Compute nodes The corresponding OVN metadata namespace for Virtual Machine (VM) instances on Compute nodes has changed from ovnmeta-<datapath_uuid> to ovnmeta-<network_uuid> . 08 Dec 2022 17.0 Networking QoS rules A footnote was added to Table 9.1 stating that ML2/OVN does not support DSCP marking QoS policies on tunneled protocols. 30 Nov 2022 17.0 Networking Configuring Load-balancing service flavors The three topics in Chapter 6, "Configuring Load-balancing service flavors," erroneously instructed users to access the undercloud to run certain OpenStack commands. Instead, users should access the overcloud. 30 Nov 2022 17.0 Security Authenticating with keystone Added procedure to stop repeated failed logins 23 Nov 2022 17.0 Hardware Provisioning Provisioning bare metal nodes for the overcloud Updated the guidance on how to configure the href image property in the node definition file. 22 Nov 2022 17.0 Storage Creating and Managing Images Updated procedures to use the Image service (glance) command-line client instead of the Dashboard service (horizon) to create and manage images. 9 Nov 2022 17.0 Updates Performing a minor update of a containerized undercloud Updated the dnf update command from USD sudo dnf update -y python3-tripleoclient* ansible to USD sudo dnf update -y python3-tripleoclient ansible-* 7 Nov 2022 17.0 Updates Running the overcloud update preparation Added a prerequisite to regenerate custom NIC templates. 28 Oct 2022 17.0 Backup and Restore Installing ReaR on the undercloud node Installing ReaR on the control plane nodes Updated the command that you use to extract the static ansible inventory file. 20 Oct 2022 17.0 Compute Configuring filters and weights for the Compute scheduler service Updated the NovaSchedulerDefaultFilters parameter to NovaSchedulerEnabledFilters . 19 Oct 2022 17.0 DCN Configuring routed spine-leaf in the undercloud Added procedure for configuring spine/leaf networking on the undercloud. 19 Oct 2022 17.0 DCN Replacing DistributedComputeHCI nodes Added procedure for replacing a DCN node. 19 Oct 2022 17.0 Validation Director Installation and Usage guide Replaced tripleo validation commands with the new CLI validation commands. 19 Oct 2022 17.0 Validation Creating a validation Added procedural content about creating a validation. 19 Oct 2022 17.0 Validation Changing the validation configuration file Added procedural content about changing the validation configuration file. 14 Oct 2022 17.0 Identity Users and Identity Management Guide Added procedural content about changing the default region name. 
14 Oct 2022 17.0 Identity Users and Identity Management Guide Added conceptual information about resource credential files. 14 Oct 2022 17.0 Hardware Provisioning Provisioning and deploying your overcloud Updated the provisioning step to include details on how to use your own templates instead of the default templates when provisioning the network resources for your physical networks, and when provisioning your bare metal nodes. 11 Oct 2022 17.0 Networking Configuring bridge mappings Two steps have been added to this procedure that enable customers to change the network name from the default, datacentre . 04 Oct 2022 17.0 Networking Configuring the Networking service for QoS policies The example for the SRIOV agent has changed in the in the Networking Guide topic, "Configuring the Networking service for QoS policies." 03 Oct 2022 17.0 Networking Network definition file configuration options The default value for mtu has been corrected in the Director Installation and Usage guide topic, "Network definition file configuration options." 30 Sep 2022 17.0 Networking Cleaning up after Controller node replacement The note about "bugs prevent the removal of the OVN controller and metadata agents" has been deleted from the Director Installation and Usage guide topic, "Cleaning up after Controller node replacement." 28 Sep 2022 17.0 All All In Red Hat OpenStack Platform (RHOSP) 17.0, the heat-admin user has been replaced with the tripleo-admin user. 28 Sep 2022 17.0 Networking Chapter 6. Troubleshooting networks Significant changes have been made to the "Troubleshooting networks" chapter in the Networking Guide . 21 Sep 2022 17.0 Networking Using Designate for DNS-as-a-Service In Red Hat OpenStack Platform (RHOSP) 17.0, a guide has been added to support the new RHOSP DNS service (designate). 21 Sep 2022 17.0 Upgrades Framework for Upgrades guide The Framework for Upgrades guide is not published in the RHOSP 17.0 life cycle because upgrades from versions are not supported. Upgrades will be supported in RHOSP 17.1 and the Framework for Upgrades Guide will be published. Updates from 17.0.0 to 17.0.z are supported in the RHOSP 17.0 life cycle. For more information, see Keeping Red Hat OpenStack Platform Updated . 21 Sep 2022 17.0 Networking Testing Migration of the Networking Service to the ML2/OVN Mechanism Driver guide The Migrating the Networking Service to the ML2/OVN Mechanism Driver guide is published with RHOSP 17.0 for ML2/OVN migration testing purposes only under the title Testing Migration of the Networking Service to the ML2/OVN Mechanism Driver . ML2/OVN migrations are not supported in RHOSP 17.0, because they are not needed for production. Red Hat does not support upgrades to RHOSP 17.0, and all RHOSP 17.0 deployments use the default ML2/OVN mechanism driver. Thus all RHOSP 17.0 deployments start with ML2/OVN and migration is not needed for production. 21 Sep 2022 17.0 Compute Scaling Deployments with Compute Cells guide The Scaling Deployments with Compute Cells guide is not published for RHOSP 17.0 because the Compute cells feature does not work in RHOSP 17.0. Therefore, the Scaling Deployments with Compute Cells guide has been removed until the underlying issues are fixed. 21 Sep 2022 17.0 All Director Installation and Usage guide Creating and Managing Images guide Storage Guide Transitioning to Containerized Services guide Security and Hardening Guide The Advanced Overcloud Customization guide has been removed for RHOSP 17.0 and the content has been moved to several other guides. 
For instance, several chapters on networking have been moved to the Director Installation and Usage guide, and the chapter "Configuring the image import method and shared staging area" has been moved to the Creating and Managing Images guide. 21 Sep 2022 17.0 Security Federate with Identity Service guide The Federate with Identity Service guide has been removed for RHOSP 17.0. Its contents are consolidated in a Red Hat knowledgebase article that is currently under development. 21 Sep 2022 17.0 Security Security and Hardening Guide The Deploy Fernet on the Overcloud guide has been removed. For information about working with Fernet keys, see the Security and Hardening Guide . 21 Sep 2022 17.0 All Product Documentation for Red Hat OpenStack Platform 17.0 The Product Documentation landing page, also known as splash page, has been reorganized. Sections have been renamed, removed, or replaced and the list of titles represents the latest set of titles. 21 Sep 2022 17.0 All Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director The Deploying an overcloud with containerized Red Hat Ceph guide is now called Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . The content in this document has changed to reflect changes in Red Hat Ceph Storage deployment. 21 Sep 2022 17.0 All Firewall Rules for Red Hat OpenStack Platform The Firewall Rules for Red Hat OpenStack Platform guide will not be updated or published in RHOSP 17.0. Red Hat plans to update and publish the guide for RHOSP 17.1.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/release_notes/doc-changes_rhosp-relnotes
Chapter 6. Ceph Object Storage Daemon (OSD) configuration
Chapter 6. Ceph Object Storage Daemon (OSD) configuration As a storage administrator, you can configure the Ceph Object Storage Daemon (OSD) to be redundant and optimized based on the intended workload. Prerequisites Installation of the Red Hat Ceph Storage software. 6.1. Ceph OSD configuration All Ceph clusters have a configuration, which defines: Cluster identity Authentication settings Ceph daemon membership in the cluster Network configuration Host names and addresses Paths to keyrings Paths to OSD log files Other runtime options A deployment tool, such as cephadm , will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a cluster without using a deployment tool. For your convenience, each daemon has a series of default values. Many are set by the ceph/src/common/config_opts.h script. You can override these settings with a Ceph configuration file or at runtime by using the monitor tell command or connecting directly to a daemon socket on a Ceph node. Important Red Hat does not recommend changing the default paths, as it makes it more difficult to troubleshoot Ceph later. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 6.2. Scrubbing the OSD In addition to making multiple copies of objects, Ceph ensures data integrity by scrubbing placement groups. Ceph scrubbing is analogous to the fsck command on the object storage layer. For each placement group, Ceph generates a catalog of all objects and compares each primary object and its replicas to ensure that no objects are missing or mismatched. Light scrubbing (daily) checks the object size and attributes. Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity. Scrubbing is important for maintaining data integrity, but it can reduce performance. Adjust the following settings to increase or decrease scrubbing operations. Additional resources See Ceph scrubbing options in the appendix of the Red Hat Ceph Storage Configuration Guide for more details. 6.3. Backfilling an OSD When you add Ceph OSDs to a cluster or remove them from the cluster, the CRUSH algorithm rebalances the cluster by moving placement groups to or from Ceph OSDs to restore the balance. The process of migrating placement groups and the objects they contain can reduce the cluster operational performance considerably. To maintain operational performance, Ceph performs this migration with the 'backfill' process, which allows Ceph to set backfill operations to a lower priority than requests to read or write data. 6.4. OSD recovery When the cluster starts or when a Ceph OSD terminates unexpectedly and restarts, the OSD begins peering with other Ceph OSDs before a write operation can occur. If a Ceph OSD crashes and comes back online, usually it will be out of sync with other Ceph OSDs containing more recent versions of objects in the placement groups. When this happens, the Ceph OSD goes into recovery mode and seeks to get the latest copy of the data and bring its map back up to date. Depending upon how long the Ceph OSD was down, the OSD's objects and placement groups may be significantly out of date. Also, if a failure domain went down, for example, a rack, more than one Ceph OSD might come back online at the same time. This can make the recovery process time consuming and resource intensive. 
To maintain operational performance, Ceph performs recovery with limits on the number of recovery requests, threads, and object chunk sizes, which allows Ceph to perform well in a degraded state. Additional resources For descriptions and usage of all the Red Hat Ceph Storage Ceph OSD configuration options, see OSD object daemon storage configuration options .
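As an illustrative example only (the option names are standard OSD settings, but the values shown are arbitrary and the defaults in your release may differ), scrub, backfill, and recovery activity can be tuned at runtime through the central configuration database:
# Limit concurrent scrubs, backfills, and recovery operations per OSD
ceph config set osd osd_max_scrubs 1
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 3
# Confirm the current value of a setting
ceph config get osd osd_max_scrubs
Lower values reduce the impact of scrubbing, backfill, and recovery on client I/O at the cost of longer completion times.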
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/ceph-object-storage-daemon-configuration
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket . Include the Document URL and the section number, and describe the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/migration_guide/proc_providing-feedback-on-red-hat-documentation_default
Chapter 29. Red Hat Enterprise Linux Atomic Host 7.4.0
Chapter 29. Red Hat Enterprise Linux Atomic Host 7.4.0 29.1. Atomic Host OStree update : New Tree Version: 7.4.0 (hash: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a) Changes since Tree Version 7.3.6 (hash: e073a47baa605a99632904e4e05692064302afd8769a15290d8ebe8dbfd3c81b) Updated packages : atomic-devmode-0.3.7-2.el7 cockpit-ostree-141-2.el7 redhat-release-atomic-host-7.4-20170427.0.atomic.el7.1 rpm-ostree-client-2017.6-5.atomic.el7 29.2. Extras Updated packages : atomic-1.18.1-3.1.git0705b1b.el7 cockpit-141-4.el7 container-selinux-2.21-1.el7 docker-1.12.6-48.git0fdc778.el7 docker-distribution-2.6.1-1.1.gita25b9ef.el7 docker-latest-1.13.1-21.1.gitcd75c68.el7 dpdk-16.11.2-4.el7 * etcd-3.1.9-2.el7 flannel-0.7.1-2.el7 gomtree-0.3.1-2.1.el7 libev-4.15-7.el7 * libssh-0.7.1-3.el7 * oci-register-machine-0-3.11.1.gitdd0daef.el7 oci-systemd-hook-0.1.8-4.1.gite533efa.el7 ostree-2017.7-1.el7 python-backports-lzma-0.0.2-9.el7 * python-gevent-1.0-3.el7 * python-greenlet-0.4.2-4.el7 * runc-1.0.0-12.1.gitf8ce01d.el7 skopeo-0.1.20-2.1.gite802625.el7 storaged-2.5.2-3.el7 * New packages : container-storage-setup-0.3.0-3.git927974f.el7 sshpass-1.06-2.el7 * python-httplib2-0.9.1-3.el7 * libtommath-0.42.0-6.el7 * python-passlib-1.6.5-2.el7 * python-paramiko-2.1.1-2.el7 * ansible-2.3.1.0-3.el7 * python-crypto-2.6.1-15.el7 * libtomcrypt-1.17-26.el7 * rhel-system-roles-0.2-2.el7 * driverctl-0.95-1.el7 * The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 29.2.1. Container Images Updated : Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Kubernetes apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes controller-manager Container (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes scheduler Container Image (rhel7/kubernetes-scheduler) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) Red Hat Enterprise Linux 7.4 Container Image (rhel7.4, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) 29.3. New Features Limited support for containers on little-endian IBM power systems Now containers have limited support on the little-endian variant of IBM power Systems (PPCle). See the Supported Architectures for Containers on RHEL for details. Notably, packages from the Extras channel are now provided for the little-endian variant of IBM power Systems, along with the rhel7-ppc64le base container. This enables using containers on these systems with Red Hat Enterprise Linux 7.4. overlay2 storage driver now available The overlay2 graph driver has been upgraded from a Technology Preview to a fully supported feature. The overlay2 graph driver, along with overlay , uses OverlayFS, a copy-on-write union file system that features page-cache sharing between containers. However, overlay2 is the more performant option. 
To enable the driver, specify overlay2 in the /etc/sysconfig/docker-storage-setup file: OverlayFS now can be run with SELinux enforced Previously, SELinux had to be in permissive or disabled mode for OverlayFS to work. Now you can run the OverlayFS file system with SELinux in enforcing mode. For more information on OverlayFS, see Overlay Graph Driver . SSSD in a container is now fully supported The System Security Services Daemon (SSSD) in a container has been upgraded from a Technology Preview to a fully supported feature. SSSD allows Red Hat Enterprise Linux Atomic Host authentication subsystem to be connected to central identity providers such as Red Hat Identity Management and Microsoft Active Directory. To install this new image, use the atomic install rhel7/sssd command. For full documentation on SSSD, see Configuring SSSD . Package layering is now fully supported The pkg-add subcommand of the rpm-ostree tool has been upgraded from a Technology Preview to a fully supported feature. The rpm-ostree install commands installs layered packages that are persistent across reboots. This command can be used to install individual packages that are not part of the original OSTree, such as diagnostics tools. For detailed information about package layering, see Package Layering . Image signing is now fully supported The image signing and validation functionality has been upgraded from a Technology Preview to a fully supported feature. Signing container images on RHEL and RHEL Atomic Host systems provides a means of validating where a container image came from, checking that the image has not been tampered with, and setting policies to determine which validated images you will allow to use on your systems. The main image signing tasks can be done as follows: To sign and distribute an image, use the atomic sign and atomic push commands. To get and verify a signed image, use the atomic pull and atomic verify commands. To designate a signed image as trusted and acceptable on the local system, use the atomic trust command. For the current release, image signing is only supported when pushing and pulling between Docker v2 registries (such as the registry software included in the docker-distribution package) and the Docker Hub (docker.io). To learn more about image signing, see Image Signing . GPG verification changes for OSTree commits For new installations of RHEL Atomic Host 7.4.0 and later, the GPG verification of OSTree commits is enabled by default. If you upgrade from RHEL Atomic Host 7.3, you can enable GPG verification manually. To enable GPG verification, set the gpg-verify directive in the /etc/ostree/remotes.d/redhat.conf file to true . If GPG verification is enabled, the output of the atomic host status command shows information about the GPG signature of the commit. docker-storage-setup renamed to container-storage-setup The docker-storage-setup utility has been renamed to container-storage-setup for RHEL7.4 and RHEL Atomic Host 7.4. Note that: The name of the package has also changed to container-storage-setup . The name of the service is still docker-storage-setup . The default configuration is in the /usr/share/container-storage-setup/container-storage-setup file, but your configuration should go to /etc/sysconfig/docker-storage-setup , which overrides configuration from /usr/share/container-storage-setup/container-storage-setup .
[ "STORAGE_DRIVER=overlay2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_4_0
8.49. edac-utils
8.49. edac-utils 8.49.1. RHBA-2014:0768 - edac-utils bug fix update An updated edac-utils package that fixes one bug is now available for Red Hat Enterprise Linux 6. Error Detection And Correction (EDAC) is the current set of drivers in the Linux kernel that handles detection of ECC errors from memory controllers for most chipsets on the 32-bit x86, AMD64, and Intel 64 architectures. The user-space component consists of an init script which ensures that EDAC drivers and DIMM labels are loaded at system startup, as well as a library and utility for reporting current error counts from the EDAC sysfs files. Bug Fix BZ# 679812 Previously, the exit status of the edac-utils init script was not set correctly. As a consequence, running the 'service edac status' command returned exit status 0, which indicates a running service, even though the 'service edac start' command leaves no program running once the drivers and DIMM labels are loaded. With this update, the init script returns exit status 3 in this situation. Users of edac-utils are advised to upgrade to this updated package, which fixes this bug.
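For example, the corrected behavior can be confirmed with a quick illustrative check of the init script's exit status after the service has been started:
service edac start
service edac status
echo $?
With the fixed package, the final command prints 3 rather than 0, because the init script does not leave a program running.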
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/edac-utils
Getting Started with Streams for Apache Kafka on OpenShift
Getting Started with Streams for Apache Kafka on OpenShift Red Hat Streams for Apache Kafka 2.7 Get started using Streams for Apache Kafka 2.7 on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/getting_started_with_streams_for_apache_kafka_on_openshift/index
Chapter 14. Using a sink binding with Service Mesh
Chapter 14. Using a sink binding with Service Mesh You can use a sink binding with Service Mesh. 14.1. Configuring a sink binding with Service Mesh This procedure describes how to configure a sink binding with Service Mesh. Prerequisites You have set up integration of Service Mesh and Serverless. Procedure Create a Service object in a namespace that is member of the ServiceMeshMemberRoll : Example event-display-service.yaml configuration file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 2 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest 1 A namespace that is a member of the ServiceMeshMemberRoll . 2 This annotation injects Service Mesh sidecars into the Knative service pods. Apply the Service object: USD oc apply -f event-display-service.yaml Create a SinkBinding object: Example heartbeat-sinkbinding.yaml configuration file apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat namespace: <namespace> 1 spec: subject: apiVersion: batch/v1 kind: Job 2 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 A namespace that is part of the ServiceMeshMemberRoll . 2 Bind any Job with the label app: heartbeat-cron to the event sink. Apply the SinkBinding object: USD oc apply -f heartbeat-sinkbinding.yaml Create a CronJob object: Example heartbeat-cronjob.yaml configuration file apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron namespace: <namespace> 1 spec: # Run every minute schedule: "* * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 2 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace 1 A namespace that is part of the ServiceMeshMemberRoll . 2 Inject Service Mesh sidecars into the CronJob pods. Apply the CronJob object: USD oc apply -f heartbeat-cronjob.yaml Optional: Verify that the events were sent to the Knative event sink by looking at the message dumper function logs: Example command USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" }
[ "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 2 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest", "oc apply -f event-display-service.yaml", "apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat namespace: <namespace> 1 spec: subject: apiVersion: batch/v1 kind: Job 2 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f heartbeat-sinkbinding.yaml", "apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron namespace: <namespace> 1 spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 2 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "oc apply -f heartbeat-cronjob.yaml", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/eventing/sinkbinding-with-ossm
Chapter 6. Creating a kernel-based virtual machine and booting the installation ISO in the VM
Chapter 6. Creating a kernel-based virtual machine and booting the installation ISO in the VM You can create a kernel-based virtual machine (KVM) and start the Red Hat Enterprise Linux installation. The following instructions apply only to installation in a VM. If you are installing RHEL on a physical system, you can skip this section. Procedure Create a virtual machine that runs Red Hat Enterprise Linux as a KVM guest operating system by using the following virt-install command on the KVM host: Additional resources virt-install man page on your system Creating virtual machines by using the command line
[ "virt-install --name=<guest_name> --disk size=<disksize_in_GB> --memory=<memory_size_in_MB> --cdrom <filepath_to_iso> --graphics vnc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/installing-under-kvm_rhel-installer
7.119. lvm2
7.119. lvm2 7.119.1. RHBA-2015:1411 - lvm2 bug fix and enhancement update Updated lvm2 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The lvm2 packages include complete support for handling read and write operations on physical volumes (PVs), creating volume groups (VGs) from one or more PVs, and creating one or more logical volumes (LVs) in VGs. Two enhancements are described in the Red Hat Enterprise Linux 6.7 Release Notes, linked from the References section: Bug Fixes BZ# 853259 This update enhances selection support in the Logical Volume Manager (LVM) BZ# 1021051 The "lvchange -p" command can change in-kernel permissions on a logical volume (LV) BZ# 736027 Volume groups (VGs) built from a high number of physical volumes (PVs) can experience significant lags. Enabling the lvmetad service reduces the operation time even on systems where the VG has metadata on all PVs. BZ# 1021728 The lvremove utility failed to remove damaged thin pools that were not repaired. The double "--force --force" option can now remove such pool volumes. BZ# 1130245 When the lvmetad service was used with "global/use_lvmetad=1" set, LVM leaked open sockets, and lvmetad kept threads for existing sockets. Now, LVM no longer leaks open lvmetad sockets, and lvmetad frees unused threads. BZ# 1132211 Activating a thin pool failed under certain circumstances. The lvm2 utility now properly rounds to 64 kB thin pool chunk size, thus fixing this bug. BZ# 1133079 The lvconvert utility displayed internal error messages under certain circumstances. Now, lvconvert verifies if the "--originname" value differs from the "--thinpool" value before the conversion begins. The messages are no longer displayed. BZ# 1133093 The user could not use the lvconvert utility to repair or split mirrors from cache data and cache metadata volumes due to strict checks for LV names. The checks have been relaxed, and lvconvert can be successfully used for these operations. BZ# 1136925 The lvm2 utility previously in some cases attempted to access incorrect devices for locking. Now, lvm2 uses the expected LV lock for snapshot volumes, thus fixing this bug. BZ# 1140128 When the volume_list parameter was set to forbid activating volumes during thin pool creation on error code path, some volumes could remain active in the device mapper table without the proper lock being held. All such volumes are now correctly deactivated before lvm2 exits. BZ# 1141386 Changing the VG clustering attribute could malfunction when clustered locking was selected. The code now correctly checks and propagates locks even for non-clustered VGs in this situation. The bug no longer occurs. BZ# 1143747 It is no longer possible to set the "--minor" and "--major" options for thin pool volumes with the lvm2 utility. If the user attempts to set them, lvm2 correctly informs the user they are not supported. BZ# 1171805 , BZ# 1205503 The vgimportclone script did sometimes not work as expected and in some cases also failed to rename and import duplicated VGs. The script now properly handles when the "filter" setting is missing from the lvm.conf file, and its code has been made more robust, thus fixing these bugs. BZ# 1184353 The "--clear-needs-check-flag" option was missing from the default value for the thin_check_options option in the "global" section of the lvm.conf file after installing lvm2. Now, "--clear-needs-check-flag" is set by default after installation. 
BZ# 1196767 The pvs utility did not list all PVs when reporting only label fields for given PVs if "obtain_device_list_from_udev=0" was set in lvm.conf. Now, LVM2 generates correct content for the persistent cache, thus fixing this bug. Enhancements BZ# 1202916 With this update, LVM cache is fully supported. Users can now create LVs with a small fast device that serves as a cache to larger and slower devices. For information on creating cache LVs, see the lvmcache(7) man page. BZ# 1211645 This update adds the "--enable-halvm", "--disable-halvm", "--mirrorservice", and "--startstopservices" options to the lvmconf script. For more information, see the lvmconf(8) man page. Users of lvm2 are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
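As a rough illustration of some of the commands referenced in these notes (the volume group and logical volume names are placeholders, not taken from the advisory):
# Remove a damaged thin pool that cannot be repaired (see BZ#1021728)
lvremove --force --force vg00/thinpool0
# Change the in-kernel permission of a logical volume to read-only (see BZ#1021051)
lvchange -p r vg00/lv_data
# Enable the HA-LVM configuration by using the new lvmconf option (see BZ#1211645)
lvmconf --enable-halvm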
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-lvm2
Preface
Preface You can use a host with a compatible graphics processing unit (GPU) to run virtual machines in Red Hat Virtualization that are suited for graphics-intensive tasks and for running software that cannot run without a GPU, such as CAD. You can assign a GPU to a virtual machine in one of the following ways: GPU passthrough : You can assign a host GPU to a single virtual machine, so the virtual machine, instead of the host, uses the GPU. Virtual GPU (vGPU) : You can divide a physical GPU device into one or more virtual devices, referred to as mediated devices . You can then assign these mediated devices to one or more virtual machines as virtual GPUs. These virtual machines share the performance of a single physical GPU. For some GPUs, only one mediated device can be assigned to a single guest. vGPU support is only available on selected NVIDIA GPUs. Example: A host has four GPUs. Each GPU can support up to 16 vGPUs, for a total of 64 vGPUs. Some possible vGPU assignments are: one virtual machine with 64 vGPUs 64 virtual machines, each with one vGPU 32 virtual machines, each with one vGPU; eight virtual machines, each with two vGPUs; four virtual machines, each with four vGPUs
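For background, a mediated device for a vGPU is created through the kernel's mdev sysfs interface. The following sketch only illustrates that underlying mechanism; the PCI address and vGPU type name are placeholders, and in Red Hat Virtualization the mediated device is normally created for you when you assign a vGPU to a virtual machine:
# List the vGPU (mdev) types that the GPU at this PCI address supports
ls /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types
# Create one mediated device of a chosen type by writing a new UUID to its create node
echo "$(uuidgen)" > /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/nvidia-63/create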
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/setting_up_an_nvidia_gpu_for_a_virtual_machine_in_red_hat_virtualization/pr01
Chapter 5. Technology Previews
Chapter 5. Technology Previews This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ . Users can archive older data to an AWS bucket With this release, users can enable data transition to a remote cloud service, such as Amazon Web Services (AWS), as part of the lifecycle configuration. See the Transitioning data to Amazon S3 cloud service for more details. Expands the application of S3 select to Apache Parquet format With this release, there are now two S3 select workflows, one for CSV and one for Parquet, that provide S3 select operations with CSV and Parquet objects. See the S3 select operations in the Red Hat Ceph Storage Developer Guide for more details. Bucket granular multi-site sync policies are now supported Red Hat now supports bucket granular multi-site sync policies. See the Using multi-site sync policies section in the Red Hat Ceph Storage Object Gateway Guide for more details. Server-Side encryption is now supported With this release, Red Hat provides support for managing Server-Side encryption. This enables S3 users to protect data at rest with a unique key through Server-Side encryption with Amazon S3-managed encryption keys (SSE-S3). Users can use the PutBucketEncryption S3 feature to enforce object encryption Previously, to enforce object encryption in order to protect data, users were required to add a header to each request, which was not possible in all cases. With this release, Ceph Object Gateway is updated to support the PutBucketEncryption S3 action. Users can use the PutBucketEncryption S3 feature with the Ceph Object Gateway without adding headers to each request. This is handled by the Ceph Object Gateway. 5.1. The Cephadm utility New Ceph Management gateway and the OAuth2 Proxy service for unified access and high availability With this enhancement, the Ceph Dashboard introduces the Ceph Management gateway ( mgmt-gateway ) and the OAuth2 Proxy service ( oauth2-proxy ). With the Ceph Management gateway ( mgmt-gateway ) and the OAuth2 Proxy ( oauth2-proxy ) in place, nginx automatically directs the user through the oauth2-proxy to the configured Identity Provider (IdP), when single sign-on (SSO) is configured. Bugzilla:2298666 5.2. Ceph Dashboard New OAuth2 SSO OAuth2 SSO uses the oauth2-proxy service to work with the Ceph Management gateway ( mgmt-gateway ), providing unified access and improved user experience. Bugzilla:2312560 5.3. Ceph Object Gateway New bucket logging support for Ceph Object Gateway Bucket logging provides a mechanism for logging all access to a bucket. The log data can be used to monitor bucket activity, detect unauthorized access, get insights into the bucket usage and use the logs as a journal for bucket changes. The log records are stored in objects in a separate bucket and can be analyzed later. Logging configuration is done at the bucket level and can be enabled or disabled at any time. The log bucket can accumulate logs from multiple buckets.
The configured prefix may be used to distinguish between logs from different buckets. For performance reasons, even though the log records are written to persistent storage, the log object appears in the log bucket only after a configurable amount of time or when reaching the maximum object size of 128 MB. Adding a log object to the log bucket is done in such a way that if no more records are written to the object, it might remain outside of the log bucket even after the configured time has passed. There are two logging types: standard and journal . The default logging type is standard . When set to standard the log records are written to the log bucket after the bucket operation is completed. As a result the logging operation can fail with no indication to the client. When set to journal the records are written to the log bucket before the bucket operation is complete. As a result, the operation does not run if the logging action fails and an error is returned to the client. You can complete the following bucket logging actions: enable, disable, and get. Bugzilla:2308169 Support for user accounts through Identity and Access Management (IAM) With this release, Ceph Object Gateway supports user accounts as an optional feature to enable the self-service management of Users, Groups, and Roles similar to those in AWS Identity and Access Management(IAM). Restore objects transitioned to remote cloud endpoint back into Ceph Object gateway using the cloud-restore feature With this release, the cloud-restore feature is implemented. This feature allows users to restore objects transitioned to remote cloud endpoint back into Ceph Object gateway, using either S3 restore-object API or by rehydrating using read-through options. Bugzilla:2293539
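As an illustration of enforcing default object encryption through the standard S3 API, a request such as the following could be sent to the Ceph Object Gateway. The bucket name and endpoint URL are placeholders, and the exact workflow should be confirmed against the Object Gateway documentation:
# Enforce SSE-S3 (AES256) default encryption on a bucket served by the Ceph Object Gateway
aws s3api put-bucket-encryption --bucket mybucket --endpoint-url http://rgw.example.com:8080 --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'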
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/8.0_release_notes/technology-previews
Chapter 17. Tuning CPU frequency to optimize energy consumption
Chapter 17. Tuning CPU frequency to optimize energy consumption You can optimize the power consumption of your system by using the available cpupower commands to set CPU speed on a system according to your requirements after setting up the required CPUfreq governor. 17.1. Supported cpupower tool commands The cpupower tool is a collection of tools to examine and tune power saving related features of processors. The cpupower tool supports the following commands: idle-info Displays the available idle states and other statistics for the CPU idle driver using the cpupower idle-info command. For more information, see CPU Idle States . idle-set Enables or disables specific CPU idle state using the cpupower idle-set command as root. Use -d to disable and -e to enable a specific CPU idle state. frequency-info Displays the current cpufreq driver and available cpufreq governors using the cpupower frequency-info command. For more information, see CPUfreq drivers , Core CPUfreq Governors , and Intel P-state CPUfreq governors . frequency-set Sets the cpufreq and governors using the cpupower frequency-set command as root. For more information, see Setting up CPUfreq governor . set Sets processor power saving policies using the cpupower set command as root. Using the --perf-bias option, you can enable software on supported Intel processors to determine the balance between optimum performance and saving power. Assigned values range from 0 to 15 , where 0 is optimum performance and 15 is optimum power efficiency. By default, the --perf-bias option applies to all cores. To apply it only to individual cores, add the --cpu cpulist option. info Displays processor power related and hardware configurations, which you have enabled using the cpupower set command. For example, if you assign the --perf-bias value as 5 : monitor Displays the idle statistics and CPU demands using the cpupower monitor command. Using the -l option, you can list all available monitors on your system and the -m option to display information related to specific monitors. For example, to monitor information related to the Mperf monitor, use the cpupower monitor -m Mperf command as root. Additional resources cpupower(1) , cpupower-idle-info(1) , cpupower-idle-set(1) , cpupower-frequency-set(1) , cpupower-frequency-info(1) , cpupower-set(1) , cpupower-info(1) , and cpupower-monitor(1) man pages on your system 17.2. CPU Idle States CPUs with the x86 architecture support various states, such as, few parts of the CPU are deactivated or using lower performance settings, known as C-states. With this state, you can save power by partially deactivating CPUs that are not in use. There is no need to configure the C-state, unlike P-states that require a governor and potentially some set up to avoid undesirable power or performance issues. C-states are numbered from C0 upwards, with higher numbers representing decreased CPU functionality and greater power saving. C-states of a given number are broadly similar across processors, although the exact details of the specific feature sets of the state may vary between processor families. C-states 0-3 are defined as follows: C0 In this state, the CPU is working and not idle at all. C1, Halt In this state, the processor is not executing any instructions but is typically not in a lower power state. The CPU can continue processing with practically no delay. All processors offering C-states need to support this state. 
Pentium 4 processors support an enhanced C1 state called C1E that actually is a state for lower power consumption. C2, Stop-Clock In this state, the clock is frozen for this processor but it keeps the complete state for its registers and caches, so after starting the clock again it can immediately start processing again. This is an optional state. C3, Sleep In this state, the processor goes to sleep and does not need to keep its cache up to date. Due to this reason, waking up from this state needs considerably more time than from the C2 state. This is an optional state. You can view the available idle states and other statistics for the CPUidle driver using the following command: Intel CPUs with the "Nehalem" microarchitecture features a C6 state, which can reduce the voltage supply of a CPU to zero, but typically reduces power consumption by between 80% and 90%. The kernel in Red Hat Enterprise Linux 8 includes optimizations for this new C-state. Additional resources cpupower(1) and cpupower-idle(1) man pages on your system 17.3. Overview of CPUfreq One of the most effective ways to reduce power consumption and heat output on your system is CPUfreq, which is supported by x86 and ARM64 architectures in Red Hat Enterprise Linux 8. CPUfreq, also referred to as CPU speed scaling, is the infrastructure in the Linux kernel that enables it to scale the CPU frequency in order to save power. CPU scaling can be done automatically depending on the system load, in response to Advanced Configuration and Power Interface (ACPI) events, or manually by user-space programs, and it allows the clock speed of the processor to be adjusted on the fly. This enables the system to run at a reduced clock speed to save power. The rules for shifting frequencies, whether to a faster or slower clock speed and when to shift frequencies, are defined by the CPUfreq governor. You can view the cpufreq information using the cpupower frequency-info command as root. 17.3.1. CPUfreq drivers Using the cpupower frequency-info --driver command as root, you can view the current CPUfreq driver. The following are the two available drivers for CPUfreq that can be used: ACPI CPUfreq Advanced Configuration and Power Interface (ACPI) CPUfreq driver is a kernel driver that controls the frequency of a particular CPU through ACPI, which ensures the communication between the kernel and the hardware. Intel P-state In Red Hat Enterprise Linux 8, Intel P-state driver is supported. The driver provides an interface for controlling the P-state selection on processors based on the Intel Xeon E series architecture or newer architectures. Currently, Intel P-state is used by default for supported CPUs. You can switch to using ACPI CPUfreq by adding the intel_pstate=disable command to the kernel command line. Intel P-state implements the setpolicy() callback. The driver decides what P-state to use based on the policy requested from the cpufreq core. If the processor is capable of selecting its P-state internally, the driver offloads this responsibility to the processor. If not, the driver implements algorithms to select the P-state. Intel P-state provides its own sysfs files to control the P-state selection. These files are located in the /sys/devices/system/cpu/intel_pstate/ directory. Any changes made to the files are applicable to all CPUs. This directory contains the following files that are used for setting P-state parameters: max_perf_pct limits the maximum P-state requested by the driver expressed in a percentage of available performance. 
The available P-state performance can be reduced by the no_turbo setting. min_perf_pct limits the minimum P-state requested by the driver, expressed in a percentage of the maximum no-turbo performance level. no_turbo limits the driver to selecting P-state below the turbo frequency range. turbo_pct displays the percentage of the total performance supported by hardware that is in the turbo range. This number is independent of whether turbo has been disabled or not. num_pstates displays the number of P-states that are supported by hardware. This number is independent of whether turbo has been disabled or not. Additional resources cpupower-frequency-info(1) man page on your system 17.3.2. Core CPUfreq governors A CPUfreq governor defines the power characteristics of the system CPU, which in turn affects the CPU performance. Each governor has its own unique behavior, purpose, and suitability in terms of workload. Using the cpupower frequency-info --governor command as root, you can view the available CPUfreq governors. Red Hat Enterprise Linux 8 includes multiple core CPUfreq governors: cpufreq_performance It forces the CPU to use the highest possible clock frequency. This frequency is statically set and does not change. As such, this particular governor offers no power saving benefit. It is only suitable for hours of a heavy workload, and only during times wherein the CPU is rarely or never idle. cpufreq_powersave It forces the CPU to use the lowest possible clock frequency. This frequency is statically set and does not change. This governor offers maximum power savings, but at the cost of the lowest CPU performance. The term "powersave" can sometimes be deceiving though, since in principle a slow CPU on full load consumes more power than a fast CPU that is not loaded. As such, while it may be advisable to set the CPU to use the powersave governor during times of expected low activity, any unexpected high loads during that time can cause the system to actually consume more power. The Powersave governor is more of a speed limiter for the CPU than a power saver. It is most useful in systems and environments where overheating can be a problem. cpufreq_ondemand It is a dynamic governor, using which you can enable the CPU to achieve maximum clock frequency when the system load is high, and also minimum clock frequency when the system is idle. While this allows the system to adjust power consumption accordingly with respect to system load, it does so at the expense of latency between frequency switching. As such, latency can offset any performance or power saving benefits offered by the ondemand governor if the system switches between idle and heavy workloads too often. For most systems, the ondemand governor can provide the best compromise between heat emission, power consumption, performance, and manageability. When the system is only busy at specific times of the day, the ondemand governor automatically switches between maximum and minimum frequency depending on the load without any further intervention. cpufreq_userspace It allows user-space programs, or any process running as root, to set the frequency. Of all the governors, userspace is the most customizable and depending on how it is configured, it can offer the best balance between performance and consumption for your system. cpufreq_conservative Similar to the ondemand governor, the conservative governor also adjusts the clock frequency according to usage. However, the conservative governor switches between frequencies more gradually. 
This means that the conservative governor adjusts to a clock frequency that it considers best for the load, rather than simply choosing between maximum and minimum. While this can possibly provide significant savings in power consumption, it does so at an ever greater latency than the ondemand governor. Note You can enable a governor using cron jobs. This allows you to automatically set specific governors during specific times of the day. As such, you can specify a low-frequency governor during idle times, for example, after work hours, and return to a higher-frequency governor during hours of heavy workload. For instructions on how to enable a specific governor, see Setting up CPUfreq governor . 17.3.3. Intel P-state CPUfreq governors By default, the Intel P-state driver operates in active mode with or without Hardware p-state (HWP) depending on whether the CPU supports HWP. Using the cpupower frequency-info --governor command as root, you can view the available CPUfreq governors. Note The functionality of performance and powersave Intel P-state CPUfreq governors is different compared to core CPUfreq governors of the same names. The Intel P-state driver can operate in the following three different modes: Active mode with hardware-managed P-states When active mode with HWP is used, the Intel P-state driver instructs the CPU to perform the P-state selection. The driver can provide frequency hints. However, the final selection depends on CPU internal logic. In active mode with HWP, the Intel P-state driver provides two P-state selection algorithms: performance : With the performance governor, the driver instructs internal CPU logic to be performance-oriented. The range of allowed P-states is restricted to the upper boundary of the range that the driver is allowed to use. powersave : With the powersave governor, the driver instructs internal CPU logic to be powersave-oriented. Active mode without hardware-managed P-states When active mode without HWP is used, the Intel P-state driver provides two P-state selection algorithms: performance : With the performance governor, the driver chooses the maximum P-state it is allowed to use. powersave : With the powersave governor, the driver chooses P-states proportional to the current CPU utilization. The behavior is similar to the ondemand CPUfreq core governor. Passive mode When the passive mode is used, the Intel P-state driver functions the same as the traditional CPUfreq scaling driver. All available generic CPUFreq core governors can be used. 17.3.4. Setting up CPUfreq governor All CPUfreq drivers are built in as part of the kernel-tools package, and selected automatically. To set up CPUfreq, you need to select a governor. Prerequisites To use cpupower , install the kernel-tools package: Procedure View which governors are available for use for a specific CPU: Enable one of the governors on all CPUs: Replace the performance governor with the cpufreq governor name as per your requirement. To only enable a governor on specific cores, use -c with a range or comma-separated list of CPU numbers. For example, to enable the userspace governor for CPUs 1-3 and 5, use: Note If the kernel-tools package is not installed, the CPUfreq settings can be viewed in the /sys/devices/system/cpu/cpuid/cpufreq/ directory. Settings and values can be changed by writing to these tunables. For example, to set the minimum clock speed of cpu0 to 360 MHz, use: Verification Verify that the governor is enabled: The current policy displays the recently enabled cpufreq governor. 
In this case, it is performance . Additional resources cpupower-frequency-info(1) and cpupower-frequency-set(1) man pages on your system
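To make the idle-state and governor controls described above concrete, the following sketch disables and re-enables one C-state and uses cron to switch governors by time of day; the state index, schedule times, and governor choices are illustrative only, not recommendations from this chapter:
# Disable the CPU idle state with index 3 on all CPUs (run as root), then re-enable it later
cpupower idle-set -d 3
cpupower idle-set -e 3
# Illustrative /etc/crontab entries that switch governors outside and during working hours
0 19 * * * root cpupower frequency-set --governor powersave
0 7 * * * root cpupower frequency-set --governor performance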
[ "cpupower set --perf-bias 5 cpupower info analyzing CPU 0: perf-bias: 5", "cpupower monitor | Nehalem || Mperf ||Idle_Stats CPU| C3 | C6 | PC3 | PC6 || C0 | Cx | Freq || POLL | C1 | C1E | C3 | C6 | C7s | C8 | C9 | C10 0| 1.95| 55.12| 0.00| 0.00|| 4.21| 95.79| 3875|| 0.00| 0.68| 2.07| 3.39| 88.77| 0.00| 0.00| 0.00| 0.00 [...]", "cpupower idle-info CPUidle governor: menu analyzing CPU 0: Number of idle states: 9 Available idle states: POLL C1 C1E C3 C6 C7s C8 C9 C10 [...]", "yum install kernel-tools", "cpupower frequency-info --governors analyzing CPU 0: available cpufreq governors: performance powersave", "cpupower frequency-set --governor performance", "cpupower -c 1-3,5 frequency-set --governor cpufreq_userspace", "echo 360000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq", "cpupower frequency-info analyzing CPU 0: driver: intel_pstate CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: Cannot determine or is not supported. hardware limits: 400 MHz - 4.20 GHz available cpufreq governors: performance powersave current policy: frequency should be within 400 MHz and 4.20 GHz. The governor \"performance\" may decide which speed to use within this range. current CPU frequency: Unable to call hardware current CPU frequency: 3.88 GHz (asserted by call to kernel) boost state support: Supported: yes Active: yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/tuning-cpu-frequency-to-optimize-energy-consumption_monitoring-and-managing-system-status-and-performance
Part III. Supported Containers for JBoss Data Grid
Part III. Supported Containers for JBoss Data Grid
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/part-supported_containers_for_jboss_data_grid
6.3. Configuring Host Names Using hostnamectl
6.3. Configuring Host Names Using hostnamectl The hostnamectl tool is provided for administering the three separate classes of host names in use on a given system. 6.3.1. View All the Host Names To view all the current host names, enter the following command: The status option is implied by default if no option is given. 6.3.2. Set All the Host Names To set all the host names on a system, enter the following command as root : This will alter the pretty, static, and transient host names alike. The static and transient host names will be simplified forms of the pretty host name. Spaces will be replaced with " - " and special characters will be removed. 6.3.3. Set a Particular Host Name To set a particular host name, enter the following command as root with the relevant option: Where option is one or more of: --pretty , --static , and --transient . If the --static or --transient options are used together with the --pretty option, the static and transient host names will be simplified forms of the pretty host name. Spaces will be replaced with " - " and special characters will be removed. If the --pretty option is not given, no simplification takes place. When setting a pretty host name, remember to use the appropriate quotation marks if the host name contains spaces or a single quotation mark. For example: 6.3.4. Clear a Particular Host Name To clear a particular host name and allow it to revert to the default, enter the following command as root with the relevant option: Where "" is a quoted empty string and where option is one or more of: --pretty , --static , and --transient . 6.3.5. Changing Host Names Remotely To execute a hostnamectl command on a remote system, use the -H, --host option as follows: Where hostname is the remote host you want to configure. The username is optional. The hostnamectl tool will use SSH to connect to the remote system.
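A short worked example that exercises the commands above, using placeholder names:
# Set all host names at once; the static and transient names are simplified from the pretty name
hostnamectl set-hostname "Stephen's notebook"
# Set only the static host name, leaving the pretty name unchanged
hostnamectl set-hostname --static stephens-notebook.example.com
# Display the pretty, static, and transient host names
hostnamectl status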
[ "~]USD hostnamectl status", "~]# hostnamectl set-hostname name", "~]# hostnamectl set-hostname name [ option ... ]", "~]# hostnamectl set-hostname \"Stephen's notebook\" --pretty", "~]# hostnamectl set-hostname \"\" [ option ... ]", "~]# hostnamectl set-hostname -H [ username ]@ hostname" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec_configuring_host_names_using_hostnamectl
7.5. Defining Audit Rules
7.5. Defining Audit Rules The Audit system operates on a set of rules that define what is to be captured in the log files. The following types of Audit rules can be specified: Control rules Allow the Audit system's behavior and some of its configuration to be modified. File system rules Also known as file watches, allow the auditing of access to a particular file or a directory. System call rules Allow logging of system calls that any specified program makes. Audit rules can be set: on the command line using the auditctl utility. Note that these rules are not persistent across reboots. For details, see Section 7.5.1, "Defining Audit Rules with auditctl " in the /etc/audit/audit.rules file. For details, see Section 7.5.3, "Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File" 7.5.1. Defining Audit Rules with auditctl The auditctl command allows you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged. Note All commands which interact with the Audit service and the Audit log files require root privileges. Ensure you execute these commands as the root user. Additionally, the CAP_AUDIT_CONTROL capability is required to set up audit services and the CAP_AUDIT_WRITE capabilityis required to log user messages. Defining Control Rules The following are some of the control rules that allow you to modify the behavior of the Audit system: -b sets the maximum amount of existing Audit buffers in the kernel, for example: -f sets the action that is performed when a critical error is detected, for example: The above configuration triggers a kernel panic in case of a critical error. -e enables and disables the Audit system or locks its configuration, for example: The above command locks the Audit configuration. -r sets the rate of generated messages per second, for example: The above configuration sets no rate limit on generated messages. -s reports the status of the Audit system, for example: -l lists all currently loaded Audit rules, for example: -D deletes all currently loaded Audit rules, for example: Defining File System Rules To define a file system rule, use the following syntax: where: path_to_file is the file or directory that is audited. permissions are the permissions that are logged: r - read access to a file or a directory. w - write access to a file or a directory. x - execute access to a file or a directory. a - change in the file's or directory's attribute. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.1. File System Rules To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file, execute the following command: Note that the string following the -k option is arbitrary. To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory, execute the following command: To define a rule that logs the execution of the /sbin/insmod command, which inserts a module into the Linux kernel, execute the following command: Defining System Call Rules To define a system call rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . 
For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. field = value specifies additional options that further modify the rule to match events based on a specified architecture, group ID, process ID, and others. For a full listing of all available field types and their values, see the auditctl (8) man page. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.2. System Call Rules To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture, use the following command: To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 1000 or larger, use the following command: Note that the -F auid!=4294967295 option is used to exclude users whose login UID is not set. It is also possible to define a file system rule using the system call rule syntax. The following command creates a rule for system calls that is analogous to the -w /etc/shadow -p wa file system rule: 7.5.2. Defining Executable File Rules To define an executable file rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. path_to_executable_file is the absolute path to the executable file that is audited. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.3. Executable File Rules To define a rule that logs all execution of the /bin/id program, execute the following command: 7.5.3. Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File To define Audit rules that are persistent across reboots, you must either directly include them in the /etc/audit/audit.rules file or use the augenrules program that reads rules located in the /etc/audit/rules.d/ directory. The /etc/audit/audit.rules file uses the same auditctl command line syntax to specify the rules. Empty lines and text following a hash sign ( # ) are ignored. The auditctl command can also be used to read rules from a specified file using the -R option, for example: Defining Control Rules A file can contain only the following control rules that modify the behavior of the Audit system: -b , -D , -e , -f , -r , --loginuid-immutable , and --backlog_wait_time . For more information on these options, see the section called "Defining Control Rules" . Example 7.4. Control Rules in audit.rules Defining File System and System Call Rules File system and system call rules are defined using the auditctl syntax. 
The examples in Section 7.5.1, "Defining Audit Rules with auditctl " can be represented with the following rules file: Example 7.5. File System and System Call Rules in audit.rules Preconfigured Rules Files In the /usr/share/doc/audit/rules/ directory, the audit package provides a set of pre-configured rules files according to various certification standards: 30-nispom.rules - Audit rule configuration that meets the requirements specified in the Information System Security chapter of the National Industrial Security Program Operating Manual. 30-pci-dss-v31.rules - Audit rule configuration that meets the requirements set by Payment Card Industry Data Security Standard (PCI DSS) v3.1. 30-stig.rules - Audit rule configuration that meets the requirements set by Security Technical Implementation Guides (STIG). To use these configuration files, create a backup of your original /etc/audit/audit.rules file and copy the configuration file of your choice over the /etc/audit/audit.rules file: Note The Audit rules have a numbering scheme that allows them to be ordered. To learn more about the naming scheme, see the /usr/share/doc/audit/rules/README-rules file. Using augenrules to Define Persistent Rules The augenrules script reads rules located in the /etc/audit/rules.d/ directory and compiles them into an audit.rules file. This script processes all files that end in .rules in a specific order based on their natural sort order. The files in this directory are organized into groups with the following meanings: 10 - Kernel and auditctl configuration 20 - Rules that could match general rules but you want a different match 30 - Main rules 40 - Optional rules 50 - Server-specific rules 70 - System local rules 90 - Finalize (immutable) The rules are not meant to be used all at once. They are pieces of a policy that should be thought out and individual files copied to /etc/audit/rules.d/ . For example, to set a system up in the STIG configuration, copy rules 10-base-config, 30-stig, 31-privileged, and 99-finalize. Once you have the rules in the /etc/audit/rules.d/ directory, load them by running the augenrules script with the --load directive: For more information on the Audit rules and the augenrules script, see the audit.rules(8) and augenrules(8) man pages.
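As a brief sketch of the rules.d workflow just described (the file name and the watch rule here are examples for illustration, not taken from the preconfigured rule files):
# Add a local rule fragment to the system local rules group (70)
echo "-w /etc/sudoers -p wa -k sudoers_changes" > /etc/audit/rules.d/70-local.rules
# Compile all *.rules fragments into /etc/audit/audit.rules and load them
augenrules --load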
[ "~]# auditctl -b 8192", "~]# auditctl -f 2", "~]# auditctl -e 2", "~]# auditctl -r 0", "~]# auditctl -s AUDIT_STATUS: enabled=1 flag=2 pid=0 rate_limit=0 backlog_limit=8192 lost=259 backlog=0", "~]# auditctl -l -w /etc/passwd -p wa -k passwd_changes -w /etc/selinux -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion ...", "~]# auditctl -D No rules", "auditctl -w path_to_file -p permissions -k key_name", "~]# auditctl -w /etc/passwd -p wa -k passwd_changes", "~]# auditctl -w /etc/selinux/ -p wa -k selinux_changes", "~]# auditctl -w /sbin/insmod -p x -k module_insertion", "auditctl -a action , filter -S system_call -F field = value -k key_name", "~]# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change", "~]# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete", "~]# auditctl -a always,exit -F path=/etc/shadow -F perm=wa", "auditctl -a action , filter [ -F arch=cpu -S system_call ] -F exe= path_to_executable_file -k key_name", "~]# auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id", "~]# auditctl -R /usr/share/doc/audit/rules/30-stig.rules", "Delete all previous rules -D Set buffer size -b 8192 Make the configuration immutable -- reboot is required to change audit rules -e 2 Panic when a failure occurs -f 2 Generate at most 100 audit messages per second -r 100 Make login UID immutable once it is set (may break containers) --loginuid-immutable 1", "-w /etc/passwd -p wa -k passwd_changes -w /etc/selinux/ -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete", "~]# cp /etc/audit/audit.rules /etc/audit/audit.rules_backup ~]# cp /usr/share/doc/audit/rules/30-stig.rules /etc/audit/audit.rules", "~]# augenrules --load augenrules --load No rules enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 0 enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Defining_Audit_Rules_and_Controls
Chapter 6. Deploying AMQ Streams using installation artifacts
Chapter 6. Deploying AMQ Streams using installation artifacts Having prepared your environment for a deployment of AMQ Streams , you can deploy AMQ Streams to an OpenShift cluster. Use the installation files provided with the release artifacts. AMQ Streams is based on Strimzi 0.36.x. You can deploy AMQ Streams 2.5 on OpenShift 4.12 and later. The steps to deploy AMQ Streams using the installation files are as follows: Deploy the Cluster Operator Use the Cluster Operator to deploy the following: Kafka cluster Topic Operator User Operator Optionally, deploy the following Kafka components according to your requirements: Kafka Connect Kafka MirrorMaker Kafka Bridge Note To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs. 6.1. Basic deployment path You can set up a deployment where AMQ Streams manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Or you can use AMQ Streams in a production environment to manage a number of Kafka clusters in different namespaces. The first step for any deployment of AMQ Streams is to install the Cluster Operator using the install/cluster-operator files. A single command applies all the installation files in the cluster-operator folder: oc apply -f ./install/cluster-operator . The command sets up everything you need to be able to create and manage a Kafka deployment, including the following: Cluster Operator ( Deployment , ConfigMap ) AMQ Streams CRDs ( CustomResourceDefinition ) RBAC resources ( ClusterRole , ClusterRoleBinding , RoleBinding ) Service account ( ServiceAccount ) The basic deployment path is as follows: Download the release artifacts Create an OpenShift namespace in which to deploy the Cluster Operator Deploy the Cluster Operator Update the install/cluster-operator files to use the namespace created for the Cluster Operator Install the Cluster Operator to watch one, multiple, or all namespaces Create a Kafka cluster After which, you can deploy other Kafka components and set up monitoring of your deployment. 6.2. Deploying the Cluster Operator The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster. When the Cluster Operator is running, it starts to watch for updates of Kafka resources. By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 8.5.3, "Running multiple Cluster Operator replicas with leader election" . 6.2.1. Specifying the namespaces the Cluster Operator watches The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch in the OpenShift cluster. You can specify the following namespaces: A single selected namespace (the same namespace containing the Cluster Operator) Multiple selected namespaces All namespaces in the cluster Watching multiple selected namespaces has the most impact on performance due to increased processing overhead. To optimize performance for namespace monitoring, it is generally recommended to either watch a single namespace or monitor the entire cluster. Watching a single namespace allows for focused monitoring of namespace-specific resources, while monitoring all namespaces provides a comprehensive view of the cluster's resources across all namespaces. 
The Cluster Operator watches for changes to the following resources: Kafka for the Kafka cluster. KafkaConnect for the Kafka Connect cluster. KafkaConnector for creating and managing connectors in a Kafka Connect cluster. KafkaMirrorMaker for the Kafka MirrorMaker instance. KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance. KafkaBridge for the Kafka Bridge instance. KafkaRebalance for the Cruise Control optimization requests. When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as Deployments, Pods, Services and ConfigMaps. Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource. Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption. When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources. Note While the Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster, the Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 1.2.1, "Watching AMQ Streams resources in OpenShift namespaces" . 6.2.2. Deploying the Cluster Operator to watch a single namespace This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Deploy the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Check the status of the deployment: oc get deployments -n my-cluster-operator-namespace Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.2.3. Deploying the Cluster Operator to watch multiple namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable. For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1 , watched-namespace-2 , watched-namespace-3 . 
apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3 For each namespace listed, install the RoleBindings . In this example, we replace watched-namespace in these commands with the namespaces listed in the step, repeating them for watched-namespace-1 , watched-namespace-2 , watched-namespace-3 : oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace> Deploy the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Check the status of the deployment: oc get deployments -n my-cluster-operator-namespace Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.2.4. Deploying the Cluster Operator to watch all namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to * . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: # ... serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: "*" # ... Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator. oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator Deploy the Cluster Operator to your OpenShift cluster. 
oc create -f install/cluster-operator -n my-cluster-operator-namespace Check the status of the deployment: oc get deployments -n my-cluster-operator-namespace Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.3. Deploying Kafka To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. AMQ Streams provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time. After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components: Kafka cluster or (preview) Kafka cluster with node pools Topic Operator User Operator When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper. If you are trying the preview of the node pools feature, you can deploy a Kafka cluster with one or more node pools. Node pools provide configuration for a set of Kafka nodes. By using node pools, nodes can have different configuration within the same Kafka cluster. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. If you haven't deployed a Kafka cluster as a Kafka resource, you can't use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams, by deploying them as standalone components . You can also deploy and use other Kafka components with a Kafka cluster not managed by AMQ Streams. 6.3.1. Deploying the Kafka cluster This procedure shows how to deploy a Kafka cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a Kafka resource. AMQ Streams provides the following example files you can use to create a Kafka cluster: kafka-persistent.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes. kafka-jbod.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes). kafka-persistent-single.yaml Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node. kafka-ephemeral.yaml Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes. kafka-ephemeral-single.yaml Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node. In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment. Ephemeral cluster In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down. Persistent cluster A persistent Kafka cluster uses persistent volumes to store ZooKeeper and Kafka data. A PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume . The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. 
When no StorageClass is specified, OpenShift will try to use the default StorageClass . The following examples show some common types of persistent volumes: If your OpenShift cluster runs on Amazon AWS, OpenShift can provision Amazon EBS volumes If your OpenShift cluster runs on Microsoft Azure, OpenShift can provision Azure Disk Storage volumes If your OpenShift cluster runs on Google Cloud, OpenShift can provision Persistent Disk volumes If your OpenShift cluster runs on bare metal, OpenShift can provision local persistent volumes The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version ( spec.kafka.version ). The property represents the version of Kafka protocol used in a Kafka cluster. From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. An update to the inter.broker.protocol.version is required when upgrading Kafka . The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file. Default cluster name and specified Kafka versions apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.5.0 #... config: #... log.message.format.version: "3.5" inter.broker.protocol.version: "3.5" # ... Prerequisites The Cluster Operator must be deployed. Procedure Create and deploy an ephemeral or persistent cluster. To create and deploy an ephemeral cluster: oc apply -f examples/kafka/kafka-ephemeral.yaml To create and deploy a persistent cluster: oc apply -f examples/kafka/kafka-persistent.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod names and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0 my-cluster is the name of the Kafka cluster. A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created. With the default deployment, you create an Entity Operator cluster, 3 Kafka pods, and 3 ZooKeeper pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka cluster configuration 6.3.2. (Preview) Deploying Kafka node pools This procedure shows how to deploy Kafka node pools to your OpenShift cluster using the Cluster Operator. Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. For each Kafka node in the node pool, any configuration not defined in node pool is inherited from the cluster configuration in the kafka resource. Note The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. The deployment uses a YAML file to provide the specification to create a KafkaNodePool resource. 
You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management. Important KRaft mode is not ready for production in Apache Kafka or in AMQ Streams. AMQ Streams provides the following example files that you can use to create a Kafka node pool: kafka.yaml Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configuration. kafka-with-dual-role-kraft-nodes.yaml Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles. kafka-with-kraft.yaml Deploys a Kafka cluster with one pool of controller nodes and one pool of broker nodes. Note You don't need to start using node pools right away. If you decide to use them, you can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster . Prerequisites The Cluster Operator must be deployed. You have created and deployed a Kafka cluster . Note If you want to migrate an existing Kafka cluster to use node pools, see the steps to migrate existing Kafka clusters . Procedure Enable the KafkaNodePools feature gate from the command line: oc set env deployment/strimzi-cluster-operator STRIMZI_FEATURE_GATES="+KafkaNodePools" Or by editing the Cluster Operator Deployment and updating the STRIMZI_FEATURE_GATES environment variable: env - name: STRIMZI_FEATURE_GATES value: +KafkaNodePools This updates the Cluster Operator. If using KRaft mode, enable the UseKRaft feature gate as well. Create a node pool. To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers: oc apply -f examples/kafka/nodepools/kafka.yaml To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes: oc apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml To deploy a Kafka cluster in KRaft mode with separate node pools for broker and controller nodes: oc apply -f examples/kafka/nodepools/kafka-with-kraft.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the node pool names and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-4 1/1 Running 0 my-cluster is the name of the Kafka cluster. pool-a is the name of the node pool. A sequential index number starting with 0 identifies each Kafka pod created. If you are using ZooKeeper, you'll also see the ZooKeeper pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool. Note Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed. Additional resources Node pool configuration 6.3.3. Deploying the Topic Operator using the Cluster Operator This procedure describes how to deploy the Topic Operator using the Cluster Operator. The Topic Operator can be deployed for use in either bidirectional mode or unidirectional mode. 
To learn more about bidirectional and unidirectional topic management, see Section 9.1, "Topic management modes" . Note Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator feature gate to be able to use it. You configure the entityOperator property of the Kafka resource to include the topicOperator . By default, the Topic Operator watches for KafkaTopic resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace in the Topic Operator spec . A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you use AMQ Streams to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster or use the watchedNamespace property to configure the Topic Operators to watch other namespaces. If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component . For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator . Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include topicOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: oc apply -f <kafka_configuration_file> Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod name and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 # ... my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . 6.3.4. Deploying the User Operator using the Cluster Operator This procedure describes how to deploy the User Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the userOperator . By default, the User Operator watches for KafkaUser resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace in the User Operator spec . A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component . For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator . Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include userOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the User Operator spec using the properties described in EntityUserOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. 
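For example, if the User Operator should watch a namespace other than the one containing the Kafka cluster, you might set the watchedNamespace property. This is an illustrative sketch; the namespace name and reconciliation interval are assumptions, and both properties can be omitted to use the defaults.
Example User Operator configuration with a watched namespace
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60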
Create or update the resource: oc apply -f <kafka_configuration_file> Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod name and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 # ... my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . 6.3.5. List of Kafka cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: Shared resources cluster-name -cluster-ca Secret with the Cluster CA private key used to encrypt the cluster communication. cluster-name -cluster-ca-cert Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers. cluster-name -clients-ca Secret with the Clients CA private key used to sign user certificates cluster-name -clients-ca-cert Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users. cluster-name -cluster-operator-certs Secret with Cluster operators keys for communication with Kafka and ZooKeeper. ZooKeeper nodes cluster-name -zookeeper Name given to the following ZooKeeper resources: StrimziPodSet for managing the ZooKeeper node pods. Service account used by the ZooKeeper nodes. PodDisruptionBudget configured for the ZooKeeper nodes. cluster-name -zookeeper- idx Pods created by the StrimziPodSet. cluster-name -zookeeper-nodes Headless Service needed to have DNS resolve the ZooKeeper pods IP addresses directly. cluster-name -zookeeper-client Service used by Kafka brokers to connect to ZooKeeper nodes as clients. cluster-name -zookeeper-config ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods. cluster-name -zookeeper-nodes Secret with ZooKeeper node keys. cluster-name -network-policy-zookeeper Network policy managing access to the ZooKeeper services. data- cluster-name -zookeeper- idx Persistent Volume Claim for the volume used for storing data for the ZooKeeper node pod idx . This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data. Kafka brokers cluster-name -kafka Name given to the following Kafka resources: StrimziPodSet for managing the Kafka broker pods. Service account used by the Kafka pods. PodDisruptionBudget configured for the Kafka brokers. cluster-name -kafka- idx Name given to the following Kafka resources: Pods created by the StrimziPodSet. ConfigMaps with Kafka broker configuration. cluster-name -kafka-brokers Service needed to have DNS resolve the Kafka broker pods IP addresses directly. cluster-name -kafka-bootstrap Service can be used as bootstrap servers for Kafka clients connecting from within the OpenShift cluster. cluster-name -kafka-external-bootstrap Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and port is 9094 . cluster-name -kafka- pod-id Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and port is 9094 . 
cluster-name -kafka-external-bootstrap Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route . The old route name will be used for backwards compatibility when the listener name is external and port is 9094 . cluster-name -kafka- pod-id Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route . The old route name will be used for backwards compatibility when the listener name is external and port is 9094 . cluster-name -kafka- listener-name -bootstrap Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners. cluster-name -kafka- listener-name - pod-id Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners. cluster-name -kafka- listener-name -bootstrap Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route . The new route name will be used for all other external listeners. cluster-name -kafka- listener-name - pod-id Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route . The new route name will be used for all other external listeners. cluster-name -kafka-config ConfigMap containing the Kafka ancillary configuration, which is mounted as a volume by the broker pods when the UseStrimziPodSets feature gate is disabled. cluster-name -kafka-brokers Secret with Kafka broker keys. cluster-name -network-policy-kafka Network policy managing access to the Kafka services. strimzi- namespace-name - cluster-name -kafka-init Cluster role binding used by the Kafka brokers. cluster-name -jmx Secret with JMX username and password used to secure the Kafka broker port. This resource is created only when JMX is enabled in Kafka. data- cluster-name -kafka- idx Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx . This resource is created only if persistent storage is selected for provisioning persistent volumes to store data. data- id - cluster-name -kafka- idx Persistent Volume Claim for the volume id used for storing data for the Kafka broker pod idx . This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data. Entity Operator These resources are only created if the Entity Operator is deployed using the Cluster Operator. cluster-name -entity-operator Name given to the following Entity Operator resources: Deployment with Topic and User Operators. Service account used by the Entity Operator. Network policy managing access to the Entity Operator metrics. cluster-name -entity-operator- random-string Pod created by the Entity Operator deployment. cluster-name -entity-topic-operator-config ConfigMap with ancillary configuration for Topic Operators. cluster-name -entity-user-operator-config ConfigMap with ancillary configuration for User Operators. cluster-name -entity-topic-operator-certs Secret with Topic Operator keys for communication with Kafka and ZooKeeper. 
cluster-name -entity-user-operator-certs Secret with User Operator keys for communication with Kafka and ZooKeeper. strimzi- cluster-name -entity-topic-operator Role binding used by the Entity Topic Operator. strimzi- cluster-name -entity-user-operator Role binding used by the Entity User Operator. Kafka Exporter These resources are only created if the Kafka Exporter is deployed using the Cluster Operator. cluster-name -kafka-exporter Name given to the following Kafka Exporter resources: Deployment with Kafka Exporter. Service used to collect consumer lag metrics. Service account used by the Kafka Exporter. Network policy managing access to the Kafka Exporter metrics. cluster-name -kafka-exporter- random-string Pod created by the Kafka Exporter deployment. Cruise Control These resources are only created if Cruise Control was deployed using the Cluster Operator. cluster-name -cruise-control Name given to the following Cruise Control resources: Deployment with Cruise Control. Service used to communicate with Cruise Control. Service account used by the Cruise Control. cluster-name -cruise-control- random-string Pod created by the Cruise Control deployment. cluster-name -cruise-control-config ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods. cluster-name -cruise-control-certs Secret with Cruise Control keys for communication with Kafka and ZooKeeper. cluster-name -network-policy-cruise-control Network policy managing access to the Cruise Control service. 6.4. Deploying Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database or messaging system, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed. In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams. Using the concept of connectors , Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource. In order to use Kafka Connect, you need to do the following. Deploy a Kafka Connect cluster Add connectors to integrate with other systems Note The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context. 6.4.1. Deploying Kafka Connect to your OpenShift cluster This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator. A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers ) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable. The deployment uses a YAML file to provide the specification to create a KafkaConnect resource. AMQ Streams provides example configuration files . In this procedure, we use the following example file: examples/connect/kafka-connect.yaml Prerequisites The Cluster Operator must be deployed. Running Kafka cluster. 
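A KafkaConnect resource generally takes the following shape. This is an illustrative sketch rather than the exact contents of the example file; the bootstrap address and TLS settings assume a Kafka cluster named my-cluster with an internal TLS listener on port 9093.
Example KafkaConnect resource
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  version: 3.5.0
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status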
Procedure Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect. oc apply -f examples/connect/kafka-connect.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod name and readiness NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0 my-connect-cluster is the name of the Kafka Connect cluster. A pod ID identifies each pod created. With the default deployment, you create a single Kafka Connect pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka Connect cluster configuration 6.4.2. Configuring Kafka Connect for multiple instances If you are running multiple instances of Kafka Connect, you must change the default configuration of the following config properties: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 # ... # ... 1 The Kafka Connect cluster ID within Kafka. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all Kafka Connect instances with the same group.id . Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. In effect, all instances are coupled to run in a cluster and use the same topics. If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors. If you wish to run multiple Kafka Connect instances, change the values of these properties for each instance. 6.4.3. Adding connectors Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector class, which can be one of the following types: Source connector A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages. Sink connector A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system. Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems.
Add connector plugins to Kafka Connect in one of the following ways: Configure Kafka Connect to build a new container image with plugins automatically Create a Docker image from the base Kafka Connect image (manually or using continuous integration) After plugins have been added to the container image, you can start, stop, and manage connector instances in the following ways: Using AMQ Streams's KafkaConnector custom resource Using the Kafka Connect API You can also create new connector instances using these options. 6.4.3.1. Building a new container image with connector plugins automatically Configure Kafka Connect so that AMQ Streams automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource. AMQ Streams will automatically download and add the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output and automatically used in the Kafka Connect deployment. Prerequisites The Cluster Operator must be deployed. A container registry. You need to provide your own container registry where images can be pushed to, stored, and pulled from. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub . Procedure Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output , and additional connectors in .spec.build.plugins : apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91 #... 1 The specification for the Kafka Connect cluster . 2 (Required) Configuration of the container registry where new images are pushed. 3 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact . Create or update the resource: Wait for the new container image to build, and for the Kafka Connect cluster to be deployed. Use the Kafka Connect REST API or KafkaConnector custom resources to use the connector plugins you added. Additional resources Kafka Connect Build schema reference 6.4.3.2. Building a new container image with connector plugins from the Kafka Connect base image Create a custom Docker image with connector plugins from the Kafka Connect base image Add the custom image to the /opt/kafka/plugins directory. You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins. 
At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins directory. Prerequisites The Cluster Operator must be deployed. Procedure Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 as the base image: Example plugins file The COPY command points to the plugin files to copy to the container image. This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task. Build the container image. Push your custom image to your container registry. Point to the new container image. You can point to the image in one of the following ways: Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... image: my-new-container-image 2 config: 3 #... 1 The specification for the Kafka Connect cluster . 2 The docker image for the pods. 3 Configuration of the Kafka Connect workers (not connectors). Edit the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to point to the new container image, and then reinstall the Cluster Operator. Additional resources Container image configuration and the KafkaConnect.spec.image property Cluster Operator configuration and the STRIMZI_KAFKA_CONNECT_IMAGES variable 6.4.3.3. Deploying KafkaConnector resources Deploy KafkaConnector resources to manage connectors. The KafkaConnector custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don't need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector . KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. In the configuration shown in this procedure, the autoRestart property is set to true . This enables automatic restarts of failed connectors and tasks. Up to seven restart attempts are made, after which restarts must be made manually. You annotate the KafkaConnector resource to restart a connector or restart a connector task manually. Example connectors You can use your own connectors or try the examples provided by AMQ Streams. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector . AMQ Streams provides an example KafkaConnector configuration file ( examples/connect/source-connector.yaml ) for the example file connector plugins, which creates the following connector instances as KafkaConnector resources: A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic. A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink). 
We use the example file to create connectors in this procedure. Note The example connectors are not intended for use in a production environment. Prerequisites A Kafka Connect deployment The Cluster Operator is running Procedure Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways: Configure Kafka Connect to build a new container image with plugins automatically Create a Docker image from the base Kafka Connect image (manually or using continuous integration) Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... With the KafkaConnector resources enabled, the Cluster Operator watches for them. Edit the examples/connect/source-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: "/opt/kafka/LICENSE" 7 topic: my-topic 8 # ... 1 Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. 2 Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to. 3 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 4 Maximum number of Kafka Connect tasks that the connector can create. 5 Enables automatic restarts of failed connectors and tasks. 6 Connector configuration as key-value pairs. 7 This example source connector configuration reads data from the /opt/kafka/LICENSE file. 8 Kafka topic to publish the source data to. Create the source KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/source-connector.yaml Create an examples/connect/sink-connector.yaml file: touch examples/connect/sink-connector.yaml Paste the following YAML into the sink-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: "/tmp/my-file" 3 topics: my-topic 4 1 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 2 Connector configuration as key-value pairs. 3 Temporary file to publish the source data to. 4 Kafka topic to read the source data from. Create the sink KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/sink-connector.yaml Check that the connector resources were created: oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector Replace <my_connect_cluster> with the name of your Kafka Connect cluster. In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector: oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning Replace <my_kafka_cluster> with the name of your Kafka cluster. 
Source and sink connector configuration options The connector configuration is defined in the spec.config property of the KafkaConnector resource. The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options. Table 6.1. Configuration options for the FileStreamSourceConnector class Name Type Default value Description file String Null Source file to read messages from. If not specified, the standard input is used. topic List Null The Kafka topic to publish data to. Table 6.2. Configuration options for the FileStreamSinkConnector class Name Type Default value Description file String Null Destination file to write messages to. If not specified, the standard output is used. topics List Null One or more Kafka topics to read data from. topics.regex String Null A regular expression matching one or more Kafka topics to read data from. 6.4.3.4. Manually restarting connectors If you are using KafkaConnector resources to manage connectors, use the restart annotation to manually trigger a restart of a connector. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart: oc get KafkaConnector Restart the connector by annotating the KafkaConnector resource in OpenShift: oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=true The restart annotation is set to true . Wait for the reconciliation to occur (every two minutes by default). The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 6.4.3.5. Manually restarting Kafka connector tasks If you are using KafkaConnector resources to manage connectors, use the restart-task annotation to manually trigger a restart of a connector task. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart: oc get KafkaConnector Find the ID of the task to be restarted from the KafkaConnector custom resource. Task IDs are non-negative integers, starting from 0: oc describe KafkaConnector <kafka_connector_name> Use the ID to restart the connector task by annotating the KafkaConnector resource in OpenShift: oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=0 In this example, task 0 is restarted. Wait for the reconciliation to occur (every two minutes by default). The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 6.4.3.6. Exposing the Kafka Connect API Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name> -connect-api:8083 , where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance. The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation . Note The strimzi.io/use-connector-resources annotation enables KafkaConnectors.
If you applied the annotation to your KafkaConnect resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. You can add the connector configuration as a JSON object. Example curl request to add connector configuration curl -X POST \ http://my-connect-cluster-connect-api:8083/connectors \ -H 'Content-Type: application/json' \ -d '{ "name": "my-source-connector", "config": { "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "file": "/opt/kafka/LICENSE", "topic":"my-topic", "tasksMax": "4", "type": "source" } }' The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features: LoadBalancer or NodePort type services Ingress resources (Kubernetes only) OpenShift routes (OpenShift only) Note The connection is insecure, so allow external access advisedly. If you decide to create services, use the labels from the selector of the <connect_cluster_name> -connect-api service to configure the pods to which the service will route the traffic: Selector configuration for the service # ... selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #... 1 Name of the Kafka Connect custom resource in your OpenShift cluster. 2 Name of the Kafka Connect deployment created by the Cluster Operator. You must also create a NetworkPolicy that allows HTTP requests from external clients. Example NetworkPolicy to allow requests to the Kafka Connect API apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress 1 The label of the pod that is allowed to connect to the API. To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command. 6.4.3.7. Limiting access to the Kafka Connect API It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure. The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number. For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. 
However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider. If you are using the KafkaConnector custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage AMQ Streams resources . With KafkaConnector resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector resources. For improved security, we recommend configuring the following properties for the Kafka Connect API: org.apache.kafka.disallowed.login.modules (Kafka 3.4 or later) Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule . Example configuration for disallowing login modules apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule # ... Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, you should explicitly disallow insecure login modules in your Kafka Connect configuration by using the org.apache.kafka.disallowed.login.modules system property. connector.client.config.override.policy Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses. Example configuration to specify connector override policy apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: connector.client.config.override.policy: None # ... 6.4.3.8. Switching from using the Kafka Connect API to using KafkaConnector custom resources You can switch from using the Kafka Connect API to using KafkaConnector custom resources to manage your connectors. To make the switch, do the following in the order shown: Deploy KafkaConnector resources with the configuration to create your connector instances. Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true . Warning If you enable KafkaConnector resources before creating them, you delete all connectors. 
To switch from using KafkaConnector resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector resources from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. When making the switch, check the status of the KafkaConnect resource . The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). When the Kafka Connect cluster is Ready , you can delete the KafkaConnector resources. 6.4.4. List of Kafka Connect cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: connect-cluster-name -connect Name given to the following Kafka Connect resources: Deployment that creates the Kafka Connect worker node pods (when StableConnectIdentities feature gate is disabled). StrimziPodSet that creates the Kafka Connect worker node pods (when StableConnectIdentities feature gate is enabled). Headless service that provides stable DNS names to the Connect pods (when StableConnectIdentities feature gate is enabled). Pod Disruption Budget configured for the Kafka Connect worker nodes. connect-cluster-name -connect- idx Pods created by the Kafka Connect StrimziPodSet (when StableConnectIdentities feature gate is enabled). connect-cluster-name -connect-api Service which exposes the REST interface for managing the Kafka Connect cluster. connect-cluster-name -config ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods. 6.5. Deploying Kafka MirrorMaker Kafka MirrorMaker replicates data between two or more Kafka clusters, within or across data centers. This process is called mirroring to avoid confusion with the concept of Kafka partition replication. MirrorMaker consumes messages from a source cluster and republishes those messages to a target cluster. Data replication across clusters supports scenarios that require the following: Recovery of data in the event of a system failure Consolidation of data from multiple source clusters for centralized analysis Restriction of data access to a specific cluster Provision of data at a specific location to improve latency 6.5.1. Deploying Kafka MirrorMaker to your OpenShift cluster This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource depending on the version of MirrorMaker deployed. Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource, which is used to deploy Kafka MirrorMaker 1, has been deprecated in AMQ Streams as well. The KafkaMirrorMaker resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . AMQ Streams provides example configuration files . In this procedure, we use the following example files: examples/mirror-maker/kafka-mirror-maker.yaml examples/mirror-maker/kafka-mirror-maker-2.yaml Prerequisites The Cluster Operator must be deployed.
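If you are replacing MirrorMaker 1, a KafkaMirrorMaker2 resource configured with the IdentityReplicationPolicy keeps the original topic names in the target cluster instead of prefixing them with the source cluster alias. The following is an illustrative sketch rather than the contents of the example file; the cluster aliases, bootstrap addresses, and replication factors are assumptions.
Example KafkaMirrorMaker2 resource using the IdentityReplicationPolicy
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mm2-cluster
spec:
  version: 3.5.0
  replicas: 1
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          replication.factor: 3
          offset-syncs.topic.replication.factor: 3
          replication.policy.class: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
      checkpointConnector:
        config:
          checkpoints.topic.replication.factor: 3
          replication.policy.class: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
      topicsPattern: ".*"
      groupsPattern: ".*"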
Procedure Deploy Kafka MirrorMaker to your OpenShift cluster: For MirrorMaker: oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml For MirrorMaker 2: oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod names and readiness NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1 my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster. A pod ID identifies each pod created. With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka MirrorMaker cluster configuration 6.5.2. List of Kafka MirrorMaker cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: <mirror-maker-name> -mirror-maker Deployment which is responsible for creating the Kafka MirrorMaker pods. <mirror-maker-name> -config ConfigMap which contains ancillary configuration for the Kafka MirrorMaker, and is mounted as a volume by the Kafka MirrorMaker pods. <mirror-maker-name> -mirror-maker Pod Disruption Budget configured for the Kafka MirrorMaker worker nodes. 6.6. Deploying Kafka Bridge Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. 6.6.1. Deploying Kafka Bridge to your OpenShift cluster This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaBridge resource. AMQ Streams provides example configuration files . In this procedure, we use the following example file: examples/bridge/kafka-bridge.yaml Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka Bridge to your OpenShift cluster: oc apply -f examples/bridge/kafka-bridge.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod name and readiness NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0 my-bridge is the name of the Kafka Bridge cluster. A pod ID identifies each pod created. With the default deployment, you install a single Kafka Bridge pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka Bridge cluster configuration Using the AMQ Streams Kafka Bridge 6.6.2. Exposing the Kafka Bridge service to your local machine Use port forwarding to expose the AMQ Streams Kafka Bridge service to your local machine on http://localhost:8080 . Note Port forwarding is only suitable for development and testing purposes. Procedure List the names of the pods in your OpenShift cluster: oc get pods -o name pod/kafka-consumer # ... pod/my-bridge-bridge-<pod_id> Connect to the Kafka Bridge pod on port 8080 : oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 & Note If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008 . API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod. 6.6.3. Accessing the Kafka Bridge outside of OpenShift After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster.
These applications use the <kafka_bridge_name> -bridge-service service to access the API. If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features: LoadBalancer or NodePort type services Ingress resources (Kubernetes only) OpenShift routes (OpenShift only) If you decide to create services, use the labels from the selector of the <kafka_bridge_name> -bridge-service service to configure the pods to which the service will route the traffic: # ... selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #... 1 Name of the Kafka Bridge custom resource in your OpenShift cluster. 6.6.4. List of Kafka Bridge cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: bridge-cluster-name -bridge Deployment which is responsible for creating the Kafka Bridge worker node pods. bridge-cluster-name -bridge-service Service which exposes the REST interface of the Kafka Bridge cluster. bridge-cluster-name -bridge-config ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods. bridge-cluster-name -bridge Pod Disruption Budget configured for the Kafka Bridge worker nodes. 6.7. Alternative standalone deployment options for AMQ Streams operators You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator. You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster. 6.7.1. Deploying the standalone Topic Operator This procedure shows how to deploy the Topic Operator as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator. A standalone deployment can operate with any Kafka cluster. Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-topic-operator.yaml deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster. The Topic Operator watches for KafkaTopic resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters. Prerequisites You are running a Kafka cluster for the Topic Operator to connect to. As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service. Procedure Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file. Example standalone Topic Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-topic-operator # ...
env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_ZOOKEEPER_CONNECT 4 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5 value: "18000" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: "120000" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7 value: "6" - name: STRIMZI_LOG_LEVEL 8 value: INFO - name: STRIMZI_TLS_ENABLED 9 value: "false" - name: STRIMZI_JAVA_OPTS 10 value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11 value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_PUBLIC_CA 12 value: "false" - name: STRIMZI_TLS_AUTH_ENABLED 13 value: "false" - name: STRIMZI_SASL_ENABLED 14 value: "false" - name: STRIMZI_SASL_USERNAME 15 value: "admin" - name: STRIMZI_SASL_PASSWORD 16 value: "password" - name: STRIMZI_SASL_MECHANISM 17 value: "scram-sha-512" - name: STRIMZI_SECURITY_PROTOCOL 18 value: "SSL" 1 The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster. 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down. 3 The label to identify the KafkaTopic resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaTopic resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources. 4 (ZooKeeper) The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using. 5 (ZooKeeper) The ZooKeeper session timeout, in milliseconds. The default is 18000 (18 seconds). 6 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes). 7 The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is 6 attempts. 8 The level for printing logging messages. You can set the level to ERROR , WARNING , INFO , DEBUG , or TRACE . 9 Enables TLS support for encrypted communication with the Kafka brokers. 10 (Optional) The Java options used by the JVM running the Topic Operator. 11 (Optional) The debugging ( -D ) options set for the Topic Operator. 12 (Optional) Skips the generation of trust store certificates if TLS is enabled through STRIMZI_TLS_ENABLED . If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false . 13 (Optional) Generates key store certificates for mTLS authentication. Setting this to false disables client authentication with mTLS to the Kafka brokers. The default is true . 14 (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false . 15 (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . 16 (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . 
17 (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . You can set the value to plain , scram-sha-256 , or scram-sha-512 . 18 (Optional) The security protocol used for communication with Kafka brokers. The default value is "PLAINTEXT". You can set the value to PLAINTEXT , SSL , SASL_PLAINTEXT , or SASL_SSL . If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true . Set this property to true , for example, if you are using Amazon AWS MSK service. If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate connection to the Kafka cluster. Example mTLS configuration # .... env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: "/path/to/truststore.p12" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: " TRUSTSTORE-PASSWORD " - name: STRIMZI_KEYSTORE_LOCATION 3 value: "/path/to/keystore.p12" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: " KEYSTORE-PASSWORD " # ... 1 The truststore contains the public keys of the Certificate Authorities used to sign the Kafka and ZooKeeper server certificates. 2 The password for accessing the truststore. 3 The keystore contains the private key for mTLS authentication. 4 The password for accessing the keystore. Deploy the Topic Operator. oc create -f install/topic-operator Check the status of the deployment: oc get deployments Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.7.1.1. (Preview) Deploying the standalone Topic Operator for unidirectional topic management Unidirectional topic management maintains topics solely through KafkaTopic resources. For more information on unidirectional topic management, see Section 9.1, "Topic management modes" . If you want to try the preview of unidirectional topic management, follow these steps to deploy the standalone Topic Operator. Procedure Undeploy the current standalone Topic Operator. Retain the KafkaTopic resources, which are picked up by the Topic Operator when it is deployed again. Edit the Deployment configuration for the standalone Topic Operator to remove any ZooKeeper-related environment variables: STRIMZI_ZOOKEEPER_CONNECT STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS TC_ZK_CONNECTION_TIMEOUT_MS STRIMZI_USE_ZOOKEEPER_TOPIC_STORE It is the presence or absence of the ZooKeeper variables that defines whether the unidirectional Topic Operator is used. Unidirectional topic management does not use ZooKeeper. If ZooKeeper environment variables are not present, the unidirectional Topic Operator is used. Otherwise, the bidirectional Topic Operator is used. Other unused environment variables that can be removed if present: STRIMZI_REASSIGN_THROTTLE STRIMZI_REASSIGN_VERIFY_INTERVAL_MS STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS STRIMZI_TOPICS_PATH STRIMZI_STORE_TOPIC STRIMZI_STORE_NAME STRIMZI_APPLICATION_ID STRIMZI_STALE_RESULT_TIMEOUT_MS (Optional) Set the STRIMZI_USE_FINALIZERS environment variable to false : Additional configuration for unidirectional topic management # ... env: - name: STRIMZI_USE_FINALIZERS value: "false" Set this environment variable to false if you do not want to use finalizers to control topic deletion . 
Example standalone Topic Operator deployment configuration for unidirectional topic management apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-topic-operator # ... env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: "120000" - name: STRIMZI_LOG_LEVEL value: INFO - name: STRIMZI_TLS_ENABLED value: "false" - name: STRIMZI_JAVA_OPTS value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_PUBLIC_CA value: "false" - name: STRIMZI_TLS_AUTH_ENABLED value: "false" - name: STRIMZI_SASL_ENABLED value: "false" - name: STRIMZI_SASL_USERNAME value: "admin" - name: STRIMZI_SASL_PASSWORD value: "password" - name: STRIMZI_SASL_MECHANISM value: "scram-sha-512" - name: STRIMZI_SECURITY_PROTOCOL value: "SSL" - name: STRIMZI_USE_FINALIZERS value: "true" Deploy the standalone Topic Operator in the standard way. 6.7.2. Deploying the standalone User Operator This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator. A standalone deployment can operate with any Kafka cluster. Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-user-operator.yaml deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster. The User Operator watches for KafkaUser resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters. Prerequisites You are running a Kafka cluster for the User Operator to connect to. As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service. Procedure Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file. Example standalone User Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-user-operator # ... 
env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: "120000" - name: STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: "true" - name: STRIMZI_CA_VALIDITY 12 value: "365" - name: STRIMZI_CA_RENEWAL 13 value: "30" - name: STRIMZI_JAVA_OPTS 14 value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_SECRET_PREFIX 16 value: "kafka-" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: "true" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000 1 The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified. 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down. 3 The OpenShift Secret that contains the public key ( ca.crt ) value of the CA (certificate authority) that signs new user certificates for mTLS authentication. 4 The OpenShift Secret that contains the private key ( ca.key ) value of the CA that signs new user certificates for mTLS authentication. 5 The label to identify the KafkaUser resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaUser resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources. 6 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes). 7 The size of the controller event queue. The queue should be at least as large as the maximum number of users that you expect the User Operator to manage. The default is 1024 . 8 The size of the worker pool for reconciling users. A bigger pool might require more resources, but it can also handle more KafkaUser resources. The default is 50 . 9 The size of the worker pool for Kafka Admin API and OpenShift operations. A bigger pool might require more resources, but it can also handle more KafkaUser resources. The default is 4 . 10 The level for printing logging messages. You can set the level to ERROR , WARNING , INFO , DEBUG , or TRACE . 11 Enables garbage collection (GC) logging. The default is true . 12 The validity period for the CA. The default is 365 days. 13 The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days to initiate certificate renewal before the old certificates expire. 14 (Optional) The Java options used by the JVM running the User Operator. 15 (Optional) The debugging ( -D ) options set for the User Operator. 16 (Optional) Prefix for the names of OpenShift secrets created by the User Operator. 
17 (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false , the User Operator will reject all resources with simple authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true . 18 (Optional) A semicolon-separated list of cron expressions defining the maintenance time windows during which expiring user certificates are renewed. 19 (Optional) Configuration options for the Kafka Admin client used by the User Operator, in properties format. If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate the connection. Otherwise, go to the next step. Example mTLS configuration # .... env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs # ... 1 The OpenShift Secret that contains the public key ( ca.crt ) value of the CA that signs Kafka broker certificates. 2 The OpenShift Secret that contains the certificate public key ( entity-operator.crt ) and private key ( entity-operator.key ) that is used for mTLS authentication against the Kafka cluster. Deploy the User Operator. oc create -f install/user-operator Check the status of the deployment: oc get deployments Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 .
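As an additional check that the standalone User Operator is reconciling resources, you might create a minimal KafkaUser in the watched namespace. This is a sketch rather than a step from the procedure: the user name is arbitrary, the strimzi.io/cluster label must match the value set in STRIMZI_LABELS , and tls authentication assumes the clients CA secrets referenced by STRIMZI_CA_CERT_NAME and STRIMZI_CA_KEY_NAME exist.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-test-user                 # arbitrary example name
  labels:
    strimzi.io/cluster: my-cluster   # must match STRIMZI_LABELS
spec:
  authentication:
    type: tls                        # mTLS credentials generated by the User Operator

After you apply the resource, the User Operator should create a Secret containing the user credentials; with STRIMZI_SECRET_PREFIX set to "kafka-" as in the example above, the Secret is named kafka-my-test-user .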
[ "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3", "create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #", "create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator", "create -f install/cluster-operator -n my-cluster-operator-namespace", "get deployments -n my-cluster-operator-namespace", "NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.5.0 # config: # log.message.format.version: \"3.5\" inter.broker.protocol.version: \"3.5\" #", "apply -f examples/kafka/kafka-ephemeral.yaml", "apply -f examples/kafka/kafka-persistent.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0", "set env deployment/strimzi-cluster-operator STRIMZI_FEATURE_GATES=\"+KafkaNodePools\"", "env - name: 
STRIMZI_FEATURE_GATES value: +KafkaNodePools", "apply -f examples/kafka/nodepools/kafka.yaml", "apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml", "apply -f examples/kafka/nodepools/kafka-with-kraft.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-4 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file>", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}", "apply -f <kafka_configuration_file>", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0", "apply -f examples/connect/kafka-connect.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91 #", "oc apply -f <kafka_connect_configuration_file>", "FROM registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001", "tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-<version>.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mongodb-driver-core-<version>.jar │ ├── README.md │ └── # ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-<version>.jar │ ├── mysql-connector-java-<version>.jar │ ├── README.md │ └── # └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-<version>.jar ├── debezium-core-<version>.jar ├── LICENSE.txt ├── postgresql-<version>.jar ├── protobuf-java-<version>.jar ├── README.md └── #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 
metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: \"/opt/kafka/LICENSE\" 7 topic: my-topic 8 #", "apply -f examples/connect/source-connector.yaml", "touch examples/connect/sink-connector.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4", "apply -f examples/connect/sink-connector.yaml", "get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector", "exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning", "get KafkaConnector", "annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=true", "get KafkaConnector", "describe KafkaConnector <kafka_connector_name>", "annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=0", "curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'", "selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: connector.client.config.override.policy: None", "apply -f examples/mirror-maker/kafka-mirror-maker.yaml", "apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1", "apply -f examples/bridge/kafka-bridge.yaml", "get pods -n <my_cluster_operator_namespace>", "NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0", "get pods -o name pod/kafka-consumer pod/my-bridge-bridge-<pod_id>", "port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &", 
"selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_ZOOKEEPER_CONNECT 4 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5 value: \"18000\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7 value: \"6\" - name: STRIMZI_LOG_LEVEL 8 value: INFO - name: STRIMZI_TLS_ENABLED 9 value: \"false\" - name: STRIMZI_JAVA_OPTS 10 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA 12 value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED 13 value: \"false\" - name: STRIMZI_SASL_ENABLED 14 value: \"false\" - name: STRIMZI_SASL_USERNAME 15 value: \"admin\" - name: STRIMZI_SASL_PASSWORD 16 value: \"password\" - name: STRIMZI_SASL_MECHANISM 17 value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL 18 value: \"SSL\"", ". env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: \"/path/to/truststore.p12\" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: \" TRUSTSTORE-PASSWORD \" - name: STRIMZI_KEYSTORE_LOCATION 3 value: \"/path/to/keystore.p12\" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: \" KEYSTORE-PASSWORD \"", "create -f install/topic-operator", "get deployments", "NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1", "env: - name: STRIMZI_USE_FINALIZERS value: \"false\"", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" - name: STRIMZI_LOG_LEVEL value: INFO - name: STRIMZI_TLS_ENABLED value: \"false\" - name: STRIMZI_JAVA_OPTS value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED value: \"false\" - name: STRIMZI_SASL_ENABLED value: \"false\" - name: STRIMZI_SASL_USERNAME value: \"admin\" - name: STRIMZI_SASL_PASSWORD value: \"password\" - name: STRIMZI_SASL_MECHANISM value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL value: \"SSL\" - name: STRIMZI_USE_FINALIZERS value: \"true\"", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-user-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: 
STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: \"true\" - name: STRIMZI_CA_VALIDITY 12 value: \"365\" - name: STRIMZI_CA_RENEWAL 13 value: \"30\" - name: STRIMZI_JAVA_OPTS 14 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_SECRET_PREFIX 16 value: \"kafka-\" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: \"true\" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000", ". env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs ...\"", "create -f install/user-operator", "get deployments", "NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/deploy-tasks_str
Chapter 4. Mounting NFS shares
Chapter 4. Mounting NFS shares As a system administrator, you can mount remote NFS shares on your system to access shared data. 4.1. Services required on an NFS client Red Hat Enterprise Linux uses a combination of a kernel module and user-space processes to provide NFS file shares: Table 4.1. Services required on an NFS client Service name NFS version Description rpc.idmapd 4 This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in the form of user@domain ) and local user and group IDs. rpc.statd 3 This service provides notification to other NFSv3 clients when the local host reboots, and to the kernel when a remote NFSv3 host reboots. Additional resources rpc.idmapd(8) , rpc.statd(8) man pages on your system 4.2. Preparing an NFSv3 client to run behind a firewall An NFS server notifies clients about file locks and the server status. To establish a connection back to the client, you must open the relevant ports in the firewall on the client. Procedure By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure fixed port numbers in the /etc/nfs.conf file: In the [lockd] section, set a fixed port number for the nlockmgr RPC service, for example: With this setting, the service automatically uses this port number for both the UDP and TCP protocol. In the [statd] section, set a fixed port number for the rpc.statd service, for example: With this setting, the service automatically uses this port number for both the UDP and TCP protocol. Open the relevant ports in firewalld : Restart the rpc-statd service: 4.3. Preparing an NFSv4 client to run behind a firewall An NFS server notifies clients about file locks and the server status. To establish a connection back to the client, you must open the relevant ports in the firewall on the client. Note NFS v4.1 and later uses the pre-existing client port for callbacks, so the callback port cannot be set separately. For more information, see the How do I set the NFS4 client callback port to a specific port? solution. Prerequisites The server uses the NFS 4.0 protocol. Procedure Open the relevant ports in firewalld : 4.4. Manually mounting an NFS share If you do not require that a NFS share is automatically mounted at boot time, you can manually mount it. Warning You can experience conflicts in your NFSv4 clientid and their sudden expiration if your NFS clients have the same short hostname. To avoid any possible sudden expiration of your NFSv4 clientid , you must use either unique hostnames for NFS clients or configure identifier on each container, depending on what system you are using. For more information, see the Red Hat Knowledgebase solution NFSv4 clientid was expired suddenly due to use same hostname on several NFS clients . Procedure Use the following command to mount an NFS share on a client: For example, to mount the /nfs/projects share from the server.example.com NFS server to /mnt , enter: Verification As a user who has permissions to access the NFS share, display the content of the mounted share: 4.5. Mounting an NFS share automatically when the system boots Automatic mounting of an NFS share during system boot ensures that critical services reliant on centralized data, such as /home directories hosted on the NFS server, have seamless and uninterrupted access from the moment the system starts up. 
Procedure Edit the /etc/fstab file and add a line for the share that you want to mount: For example, to mount the /nfs/home share from the server.example.com NFS server to /home , enter: Mount the share: Verification As a user who has permissions to access the NFS share, display the content of the mounted share: Additional resources fstab(5) man page on your system 4.6. Connecting NFS mounts in the web console Connect a remote directory to your file system using NFS. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. NFS server name or the IP address. Path to the directory on the remote server. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button. From the drop-down menu, select New NFS mount . In the New NFS Mount dialog box, enter the server or IP address of the remote server. In the Path on Server field, enter the path to the directory that you want to mount. In the Local Mount Point field, enter the path to the directory on your local system where you want to mount the NFS. In the Mount options check box list, select how you want to mount the NFS. You can select multiple options depending on your requirements. Check the Mount at boot box if you want the directory to be reachable even after you restart the local system. Check the Mount read only box if you do not want to change the content of the NFS. Check the Custom mount options box and add the mount options if you want to change the default mount option. Click Add . Verification Open the mounted directory and verify that the content is accessible. 4.7. Customizing NFS mount options in the web console Edit an existing NFS mount and add custom mount options. Custom mount options can help you to troubleshoot the connection or change parameters of the NFS mount such as changing timeout limits or configuring authentication. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. An NFS mount is added to your system. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the NFS mount you want to adjust. If the remote directory is mounted, click Unmount . You must unmount the directory during the custom mount options configuration. Otherwise, the web console does not save the configuration and this causes an error. Click Edit . In the NFS Mount dialog box, select Custom mount option . Enter mount options separated by a comma. For example: nfsvers=4 : The NFS protocol version number soft : The type of recovery after an NFS request times out sec=krb5 : The files on the NFS server can be secured by Kerberos authentication. Both the NFS client and server have to support Kerberos authentication. For a complete list of the NFS mount options, enter man nfs in the command line. Click Apply . Click Mount . Verification Open the mounted directory and verify that the content is accessible. 4.8. 
Setting up an NFS client with Kerberos in a Red Hat Enterprise Linux Identity Management domain If the NFS server uses Kerberos and is enrolled in an Red Hat Enterprise Linux Identity Management (IdM) domain, your client must also be a member of the domain to be able to mount the shares. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption. Prerequisites The NFS client is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. The exported NFS share uses Kerberos. Procedure Obtain a kerberos ticket as an IdM administrator: Retrieve the host principal, and store it in the /etc/krb5.keytab file: IdM automatically created the host principal when you joined the host to the IdM domain. Optional: Display the principals in the /etc/krb5.keytab file: Use the ipa-client-automount utility to configure mapping of IdM IDs: Mount an exported NFS share, for example: The -o sec option specifies the Kerberos security method. Verification Log in as an IdM user who has permissions to write on the mounted share. Obtain a Kerberos ticket: Create a file on the share, for example: List the directory to verify that the file was created: Additional resources The AUTH_GSS authentication method 4.9. Configuring GNOME to store user settings on home directories hosted on an NFS share If you use GNOME on a system with home directories hosted on an NFS server, you must change the keyfile backend of the dconf database. Otherwise, dconf might not work correctly. This change affects all users on the host because it changes how dconf manages user settings and configurations stored in the home directories. Procedure Add the following line to the beginning of the /etc/dconf/profile/user file. If the file does not exist, create it. With this setting, dconf polls the keyfile back end to determine whether updates have been made, so settings might not be updated immediately. The changes take effect when the users logs out and in. 4.10. Frequently used NFS mount options The following are the commonly-used options when mounting NFS shares. You can use these options with mount commands, in /etc/fstab settings, and the autofs automapper. lookupcache= mode Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all , none , or positive . nfsvers= version Specifies which version of the NFS protocol to use, where version is 3 , 4 , 4.0 , 4.1 , or 4.2 . This is useful for hosts that run multiple NFS servers, or to disable retrying a mount with lower versions. If no version is specified, the client tries version 4.2 first, then negotiates down until it finds a version supported by the server. The option vers is identical to nfsvers , and is included in this release for compatibility reasons. noacl Turns off all ACL processing. This can be needed when interfacing with old Red Hat Enterprise Linux versions that are not compatible with the recent ACL technology. nolock Disables file locking. This setting can be required when you connect to very old NFS servers. noexec Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries. nosuid Disables the set-user-identifier and set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. 
retrans= num The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each UDP request three times and each TCP request twice. timeo= num The time in tenths of a second the NFS client waits for a response before it retries an NFS request. For NFS over TCP, the default timeo value is 600 (60 seconds). The NFS client performs linear backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600 seconds. port= num Specifies the numeric value of the NFS server port. For NFSv3, if num is 0 (the default value), or not specified, then mount queries the rpcbind service on the remote host for the port number to use. For NFSv4, if num is 0 , then mount queries the rpcbind service, but if it is not specified, the standard NFS port number of TCP 2049 is used instead and the remote rpcbind is not checked anymore. rsize= num and wsize= num These options set the maximum number of bytes to be transferred in a single NFS read or write operation. There is no fixed default value for rsize and wsize . By default, NFS uses the largest possible value that both the server and the client support. In Red Hat Enterprise Linux 9, the client and server maximum is 1,048,576 bytes. For more information, see the Red Hat Knowledgebase solution What are the default and maximum values for rsize and wsize with NFS mounts? . sec= options Security options to use for accessing files on the mounted export. The options value is a colon-separated list of one or more security options. By default, the client attempts to find a security option that both the client and the server support. If the server does not support any of the selected options, the mount operation fails. Available options: sec=sys uses local UNIX UIDs and GIDs. These use AUTH_SYS to authenticate NFS operations. sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users. sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering. sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead. Additional resources mount(8) and `nfs(5)`man pages on your system 4.11. Enabling client-side caching of NFS content FS-Cache is a persistent local cache on the client that file systems can use to take data retrieved from over the network and cache it on the local disk. This helps to minimize network traffic. 4.11.1. How NFS caching works The following diagram is a high-level illustration of how FS-Cache works: FS-Cache is designed to be as transparent as possible to the users and administrators of a system. FS-Cache allows a file system on a server to interact directly with a client's local cache without creating an over-mounted file system. With NFS, a mount option instructs the client to mount the NFS share with FS-cache enabled. The mount point will cause automatic upload for two kernel modules: fscache and cachefiles . The cachefilesd daemon communicates with the kernel modules to implement the cache. FS-Cache does not alter the basic operation of a file system that works over the network. It merely provides that file system with a persistent place in which it can cache data. For example, a client can still mount an NFS share whether or not FS-Cache is enabled. 
In addition, cached NFS can handle files that will not fit into the cache (whether individually or collectively) as files can be partially cached and do not have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the client file system driver. To provide caching services, FS-Cache needs a cache back end, the cachefiles service. FS-Cache requires a mounted block-based file system, that supports block mapping ( bmap ) and extended attributes as its cache back end: XFS ext3 ext4 FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared file system's driver must be altered to allow interaction with FS-Cache, data storage or retrieval, and metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file system to support persistence: indexing keys to match file system objects to cache objects, and coherency data to determine whether the cache objects are still valid. Using FS-Cache is a compromise between various factors. If FS-Cache is being used to cache NFS traffic, it may slow the client down, but can massively reduce the network and server loading by satisfying read requests locally without consuming network bandwidth. 4.11.2. Installing and configuring the cachefilesd service Red Hat Enterprise Linux provides only the cachefiles caching back end. The cachefilesd service initiates and manages cachefiles . The /etc/cachefilesd.conf file controls how cachefiles provides caching services. Prerequisites The file system mounted under the /var/cache/fscache/ directory is ext3 , ext4 , or xfs . The file system mounted under /var/cache/fscache/ uses extended attributes, which is the default if you created the file system on RHEL 8 or later. Procedure Install the cachefilesd package: Enable and start the cachefilesd service: Verification Mount an NFS share with the fsc option to use the cache: To mount a share temporarily, enter: To mount a share permanently, add the fsc option to the entry in the /etc/fstab file: Display the FS-cache statistics: Additional resources /usr/share/doc/cachefilesd/README file /usr/share/doc/kernel-doc-<kernel_version>/Documentation/filesystems/caching/fscache.rst provided by the kernel-doc package 4.11.3. Sharing NFS cache Because the cache is persistent, blocks of data in the cache are indexed on a sequence of four keys: Level 1: Server details Level 2: Some mount options; security type; FSID; a uniquifier string Level 3: File Handle Level 4: Page number in file To avoid coherency management problems between superblocks, all NFS superblocks that require to cache the data have unique level 2 keys. Normally, two NFS mounts with the same source volume and options share a superblock, and therefore share the caching, even if they mount different directories within that volume. Example 4.1. NFS cache sharing: The following two mounts likely share the superblock as they have the same mount options, especially if because they come from the same partition on the NFS server: If the mount options are different, they do not share the superblock: Note The user can not share caches between superblocks that have different communications or protocol parameters. For example, it is not possible to share caches between NFSv4.0 and NFSv3 or between NFSv4.1 and NFSv4.2 because they force different superblocks. Also setting parameters, such as the read size ( rsize ), prevents cache sharing because, again, it forces a different superblock. 4.11.4. 
NFS cache limitations There are some cache limitations with NFS: Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is because this type of access must be direct to the server. Opening a file from a shared file system for either direct I/O or writing flushes the cached copy of the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing. Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache directories, symlinks, device files, FIFOs, and sockets. 4.11.5. How cache culling works The cachefilesd service works by caching remote data from shared file systems to free space on the local disk. This could potentially consume all available free space, which could cause problems if the disk also contains the root partition. To control this, cachefilesd tries to maintain a certain amount of free space by discarding old objects, such as less-recently accessed objects, from the cache. This behavior is known as cache culling. Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the underlying file system. There are settings in /etc/cachefilesd.conf which control six limits: brun N% (percentage of blocks), frun N% (percentage of files) If the amount of free space and the number of available files in the cache rises above both these limits, then culling is turned off. bcull N% (percentage of blocks), fcull N% (percentage of files) If the amount of available space or the number of files in the cache falls below either of these limits, then culling is started. bstop N% (percentage of blocks), fstop N% (percentage of files) If the amount of available space or the number of available files in the cache falls below either of these limits, then no further allocation of disk space or files is permitted until culling has raised things above these limits again. The default value of N for each setting is as follows: brun/frun : 10% bcull/fcull : 7% bstop/fstop : 3% When configuring these settings, the following must hold true: 0 <= bstop < bcull < brun < 100 0 <= fstop < fcull < frun < 100 These are the percentages of available space and available files and do not appear as 100 minus the percentage displayed by the df program. Important Culling depends on both b xxx and f xxx pairs simultaneously; the user can not treat them separately.
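To make the culling limits concrete, the following is a minimal sketch of an /etc/cachefilesd.conf file that sets the cache directory and the six limits to the default values described above. The dir and tag values are illustrative, and your installed file may contain additional directives:

# Location and label of the local cache
dir /var/cache/fscache
tag mycache
# Culling is turned off above these limits
brun 10%
frun 10%
# Culling starts below these limits
bcull 7%
fcull 7%
# No further cache allocation below these limits
bstop 3%
fstop 3%

After changing these values, restart the cachefilesd service for the new limits to take effect.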
[ "port= 5555", "port= 6666", "firewall-cmd --permanent --add-service=rpc-bind firewall-cmd --permanent --add-port={ 5555 /tcp, 5555 /udp, 6666 /tcp, 6666 /udp} firewall-cmd --reload", "systemctl restart rpc-statd nfs-server", "firewall-cmd --permanent --add-port= <callback_port> /tcp firewall-cmd --reload", "mount <nfs_server_ip_or_hostname> :/ <exported_share> <mount point>", "mount server.example.com:/nfs/projects/ /mnt/", "ls -l /mnt/", "<nfs_server_ip_or_hostname>:/<exported_share> <mount point> nfs default 0 0", "server.example.com:/nfs/projects /home nfs defaults 0 0", "mount /home", "ls -l /mnt/", "kinit admin", "ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_client.idm.example.com -k /etc/krb5.keytab", "klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 6 host/[email protected] 6 host/[email protected] 6 host/[email protected] 6 host/[email protected]", "ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs", "mount -o sec=krb5i server.idm.example.com:/nfs/projects/ /mnt/", "kinit", "touch /mnt/test.txt", "ls -l /mnt/test.txt -rw-r--r--. 1 admin users 0 Feb 15 11:54 /mnt/test.txt", "service-db:keyfile/user", "dnf install cachefilesd", "systemctl enable --now cachefilesd", "mount -o fsc server.example.com:/nfs/projects/ /mnt/", "<nfs_server_ip_or_hostname>:/<exported_share> <mount point> nfs fsc 0 0", "cat /proc/fs/fscache/stats", "mount -o fsc home0:/nfs/projects /projects mount -o fsc home0:/nfs/home /home/", "mount -o fsc,rsize=8192 home0:/nfs/projects /projects mount -o fsc,rsize=65536 home0:/nfs/home /home/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/mounting-nfs-shares_managing-file-systems
Chapter 1. Red Hat build of OpenJDK 11 - Extended Lifecycle Support Phase 1
Chapter 1. Red Hat build of OpenJDK 11 - Extended Lifecycle Support Phase 1 Important The 11.0.25 release in October 2024 was the last release of Red Hat build of OpenJDK 11 from Red Hat in the full support phase of the lifecycle. The full support phase for Red Hat build of OpenJDK 11 ended on 31 October 2024. See the Product Life Cycles page for details. From November 2024 onward, Red Hat will provide extended lifecycle support phase 1 (ELS‐1) support for new releases of Red Hat build of OpenJDK 11 until 31 October 2027. Access to ELS requires an OpenJDK ELS subscription. OpenJDK ELS is not included in any other ELS subscription. For more information about product lifecycle phases and available support levels, see Life Cycle Phases . For information about migrating to Red Hat build of OpenJDK version 17 or 21, see Migrating to Red Hat build of OpenJDK 17 from earlier versions or Migrating to Red Hat build of OpenJDK 21 from earlier versions .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.26/els1
Chapter 4. OAuthClientAuthorization [oauth.openshift.io/v1]
Chapter 4. OAuthClientAuthorization [oauth.openshift.io/v1] Description OAuthClientAuthorization describes an authorization created by an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this authorization kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata scopes array (string) Scopes is an array of the granted scopes. userName string UserName is the user name that authorized this client userUID string UserUID is the unique UID associated with this authorization. UserUID and UserName must both match for this authorization to be valid. 4.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclientauthorizations DELETE : delete collection of OAuthClientAuthorization GET : list or watch objects of kind OAuthClientAuthorization POST : create an OAuthClientAuthorization /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations GET : watch individual changes to a list of OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclientauthorizations/{name} DELETE : delete an OAuthClientAuthorization GET : read the specified OAuthClientAuthorization PATCH : partially update the specified OAuthClientAuthorization PUT : replace the specified OAuthClientAuthorization /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations/{name} GET : watch changes to an object of kind OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/oauth.openshift.io/v1/oauthclientauthorizations Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OAuthClientAuthorization Table 4.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.3. Body parameters Parameter Type Description body DeleteOptions schema Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClientAuthorization Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorizationList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClientAuthorization Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.8. Body parameters Parameter Type Description body OAuthClientAuthorization schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 202 - Accepted OAuthClientAuthorization schema 401 - Unauthorized Empty 4.2.2. /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations Table 4.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/oauth.openshift.io/v1/oauthclientauthorizations/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the OAuthClientAuthorization Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuthClientAuthorization Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClientAuthorization Table 4.17. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClientAuthorization Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.19. Body parameters Parameter Type Description body Patch schema Table 4.20. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClientAuthorization Table 4.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.22. Body parameters Parameter Type Description body OAuthClientAuthorization schema Table 4.23. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 401 - Unauthorized Empty 4.2.4. /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations/{name} Table 4.24. Global path parameters Parameter Type Description name string name of the OAuthClientAuthorization Table 4.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
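For illustration only, the following is a minimal sketch of how a cluster administrator might exercise these list and watch endpoints with the OpenShift CLI (oc); the query parameter values are arbitrary examples, and the endpoint path is the one documented above.

# List OAuthClientAuthorization objects through the typed client
oc get oauthclientauthorizations

# Call the list endpoint directly, passing the limit query parameter described above
oc get --raw "/apis/oauth.openshift.io/v1/oauthclientauthorizations?limit=10"

# Stream changes, which corresponds to the watch=true query parameter
oc get oauthclientauthorizations --watch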
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/oauth_apis/oauthclientauthorization-oauth-openshift-io-v1
Chapter 4. Upgrading Red Hat JBoss Web Server using this Service Pack
Chapter 4. Upgrading Red Hat JBoss Web Server using this Service Pack To install this service pack: Go to the Software Downloads page for Red Hat JBoss Web Server 6.0 . Note You require a Red Hat subscription to access the Software Downloads page. Download the Red Hat JBoss Web Server 6.0 Service Pack 1 archive file that is appropriate to your platform. Extract the archive file to the Red Hat JBoss Web Server installation directory. If you have installed Red Hat JBoss Web Server from RPM packages on Red Hat Enterprise Linux, you can use the following yum command to upgrade to the latest service pack:
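As a rough sketch of the archive-based upgrade path, the following commands use a hypothetical archive file name and installation directory; substitute the file you actually downloaded and the directory where Red Hat JBoss Web Server is installed.

# Extract the service pack archive over the existing installation (file and directory names are placeholders)
unzip -o jws-6.0.x-service-pack.zip -d /opt/jws-6.0

# On RPM-based installations on Red Hat Enterprise Linux, apply the service pack with yum instead
yum upgrade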
[ "yum upgrade" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/upgrading_red_hat_jboss_web_server_using_this_service_pack
3.3.2. Enabling the IP Port for luci
3.3.2. Enabling the IP Port for luci To allow client computers to communicate with a computer that runs luci (the Conga user interface server), you must enable the IP port assigned to luci . At each computer that runs luci , enable the IP port according to Table 3.2, "Enabled IP Port on a Computer That Runs luci " . Note If a cluster node is running luci , port 11111 should already have been enabled. Table 3.2. Enabled IP Port on a Computer That Runs luci IP Port Number Protocol Component 8084 TCP luci ( Conga user interface server) As of the Red Hat Enterprise Linux 6.1 release, which enabled configuration by means of the /etc/sysconfig/luci file, you can configure the specific IP address on which luci is served. You can use this capability if your server infrastructure incorporates more than one network and you want to access luci from the internal network only. To do this, uncomment and edit the line in the file that specifies host . For example, to change the host setting in the file to 10.10.10.10, edit the host line as follows: For more information on the /etc/sysconfig/luci file, see Section 3.4, "Configuring luci with /etc/sysconfig/luci " .
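One possible way to open the luci port with iptables on Red Hat Enterprise Linux 6 is sketched below; adapt the rule to your existing firewall policy before using it.

# Allow new connections to the luci port (TCP 8084)
iptables -I INPUT -m state --state NEW -p tcp --dport 8084 -j ACCEPT

# Persist the rule across reboots
service iptables save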
[ "host = 10.10.10.10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-iptables-conga-CA
3.2. GNOME and KDE System Monitors
3.2. GNOME and KDE System Monitors The GNOME and KDE desktop environments both have graphical tools to assist you in monitoring and modifying the behavior of your system. GNOME System Monitor The GNOME System Monitor displays basic system information and allows you to monitor system processes, and resource or file system usage. Open it with the gnome-system-monitor command in the Terminal , or click on the Applications menu, and select System Tools > System Monitor . GNOME System Monitor has four tabs: System Displays basic information about the computer's hardware and software. Processes Shows active processes, and the relationships between those processes, as well as detailed information about each process. It also lets you filter the processes displayed, and perform certain actions on those processes (start, stop, kill, change priority, etc.). Resources Displays the current CPU time usage, memory and swap space usage, and network usage. File Systems Lists all mounted file systems alongside some basic information about each, such as the file system type, mount point, and memory usage. For further information about the GNOME System Monitor , refer to the Help menu in the application, or to the Deployment Guide , available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/ . KDE System Guard The KDE System Guard allows you to monitor current system load and processes that are running. It also lets you perform actions on processes. Open it with the ksysguard command in the Terminal , or click on the Kickoff Application Launcher and select Applications > System > System Monitor . There are two tabs to KDE System Guard : Process Table Displays a list of all running processes, alphabetically by default. You can also sort processes by a number of other properties, including total CPU usage, physical or shared memory usage, owner, and priority. You can also filter the visible results, search for specific processes, or perform certain actions on a process. System Load Displays historical graphs of CPU usage, memory and swap space usage, and network usage. Hover over the graphs for detailed analysis and graph keys. For further information about the KDE System Guard , refer to the Help menu in the application.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-analyzeperf-gnome
Chapter 1. Updating a model
Chapter 1. Updating a model Red Hat Enterprise Linux AI allows you to upgrade locally downloaded LLMs to the latest version of the model. 1.1. Updating the models You can upgrade your local models to the latest version of the model using the RHEL AI tool set. Prerequisites You installed the InstructLab tools with the bootable container image. You initialized InstructLab and can use the ilab CLI. You downloaded LLMs on Red Hat Enterprise Linux AI. You created a Red Hat registry account and logged in on your machine. Procedure You can upgrade any model by running the following command. $ ilab model download --repository <repository_and_model> --release latest where: <repository_and_model> Specifies the repository location and name of the model. You can access the models from the registry.redhat.io/rhelai1/ repository. <release> Specifies the version of the model. Set it to latest for the most up-to-date version of the model, or to a specific version. Verification You can view all the downloaded models on your system with the following command: $ ilab model list
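For example, downloading a specific model might look like the following; the repository path shown, including the docker:// prefix and the granite-7b-starter model name, is only an illustration and might not match the models available to your subscription.

# Pull the latest release of an example model from the Red Hat registry
ilab model download --repository docker://registry.redhat.io/rhelai1/granite-7b-starter --release latest

# Confirm the downloaded models and their versions
ilab model list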
[ "ilab model download --repository <repository_and_model> --release latest", "ilab model list" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/updating_your_models/updating_a_model
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_debezium_on_rhel/pr01
Chapter 6. Configuring kernel parameters permanently by using the kernel_settings RHEL System Role
Chapter 6. Configuring kernel parameters permanently by using the kernel_settings RHEL System Role As an experienced user with good knowledge of Red Hat Ansible, you can use the kernel_settings role to configure kernel parameters on multiple clients at once. This solution: Provides a friendly interface with efficient input setting. Keeps all intended kernel parameters in one place. After you run the kernel_settings role from the control machine, the kernel parameters are applied to the managed systems immediately and persist across reboots. Important Note that RHEL System Roles delivered over RHEL channels are available to RHEL customers as an RPM package in the default AppStream repository. RHEL System Roles are also available as a collection to customers with Ansible subscriptions over Ansible Automation Hub. 6.1. Introduction to the kernel_settings role RHEL System Roles is a set of roles that provide a consistent configuration interface to remotely manage multiple systems. RHEL System Roles were introduced for automated configurations of the kernel using the kernel_settings System Role. The rhel-system-roles package contains this system role, and also the reference documentation. To apply the kernel parameters on one or more systems in an automated fashion, use the kernel_settings role with one or more of its role variables of your choice in a playbook. A playbook is a list of one or more plays that are human-readable, and are written in the YAML format. You can use an inventory file to define a set of systems that you want Ansible to configure according to the playbook. With the kernel_settings role you can configure: The kernel parameters using the kernel_settings_sysctl role variable Various kernel subsystems, hardware devices, and device drivers using the kernel_settings_sysfs role variable The CPU affinity for the systemd service manager and processes it forks using the kernel_settings_systemd_cpu_affinity role variable The kernel memory subsystem transparent hugepages using the kernel_settings_transparent_hugepages and kernel_settings_transparent_hugepages_defrag role variables Additional resources README.md and README.html files in the /usr/share/doc/rhel-system-roles/kernel_settings/ directory Working with playbooks How to build your inventory 6.2. Applying selected kernel parameters using the kernel_settings role Follow these steps to prepare and apply an Ansible playbook to remotely configure kernel parameters with a persistent effect on multiple managed operating systems. Prerequisites You have root permissions. Entitled by your RHEL subscription, you installed the ansible-core and rhel-system-roles packages on the control machine. An inventory of managed hosts is present on the control machine and Ansible is able to connect to them. Procedure Optionally, review the inventory file for illustration purposes: The file defines the [testingservers] group and other groups. It allows you to run Ansible more effectively against a specific set of systems. Create a configuration file to set defaults and privilege escalation for Ansible operations. Create a new ansible.cfg file and open it in a text editor, for example: Insert the following content into the file: The [defaults] section specifies a path to the inventory file of managed hosts. The [privilege_escalation] section defines that user privileges be shifted to root on the specified managed hosts. This is necessary for successful configuration of kernel parameters. When the Ansible playbook is run, you will be prompted for the user password.
The user automatically switches to root by means of sudo after connecting to a managed host. Create an Ansible playbook that uses the kernel_settings role. Create a new YAML file and open it in a text editor, for example: This file represents a playbook and usually contains an ordered list of tasks, also called plays , that are run against specific managed hosts selected from your inventory file. Insert the following content into the file: The name key is optional. It associates an arbitrary string with the play as a label and identifies what the play is for. The hosts key in the play specifies the hosts against which the play is run. The value or values for this key can be provided as individual names of managed hosts or as groups of hosts as defined in the inventory file. The vars section represents a list of variables containing selected kernel parameter names and values to which they have to be set. The roles key specifies what system role is going to configure the parameters and values mentioned in the vars section. Note You can modify the kernel parameters and their values in the playbook to fit your needs. Optionally, verify that the syntax in your play is correct. This example shows the successful verification of a playbook. Execute your playbook. # ansible-playbook kernel-roles.yml ... BECOME password: PLAY [Configure kernel settings] ********************************************************************************** PLAY RECAP ******************************************************************************************************** [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 Before Ansible runs your playbook, you are going to be prompted for your password and so that a user on managed hosts can be switched to root , which is necessary for configuring kernel parameters. The recap section shows that the play finished successfully ( failed=0 ) for all managed hosts, and that 4 kernel parameters have been applied ( changed=4 ). Restart your managed hosts and check the affected kernel parameters to verify that the changes have been applied and persist across reboots. Additional resources Preparing a control node and managed nodes to use RHEL System Roles README.html and README.md files in the /usr/share/doc/rhel-system-roles/kernel_settings/ directory Build Your Inventory Configuring Ansible Working With Playbooks Using Variables Roles
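Returning to the verification step above: after the playbook run and a reboot of the managed hosts, a quick spot check on a managed host can confirm that the values persisted. This is a sketch; the parameter names and values are the ones used in the example playbook shown with this procedure.

# Verify the sysctl values applied by the kernel_settings role
sysctl fs.file-max kernel.threads-max

# Verify the sysfs setting for the loopback interface MTU
cat /sys/class/net/lo/mtu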
[ "cat /home/jdoe/< ansible_project_name >/inventory [testingservers] [email protected] [email protected] [db-servers] db1.example.com db2.example.com [webservers] web1.example.com web2.example.com 192.0.2.42", "vi /home/jdoe/< ansible_project_name >/ansible.cfg", "[defaults] inventory = ./inventory [privilege_escalation] become = true become_method = sudo become_user = root become_ask_pass = true", "vi /home/jdoe/< ansible_project_name >/kernel-roles.yml", "--- - hosts: testingservers name: \"Configure kernel settings\" roles: - rhel-system-roles.kernel_settings vars: kernel_settings_sysctl: - name: fs.file-max value: 400000 - name: kernel.threads-max value: 65536 kernel_settings_sysfs: - name: /sys/class/net/lo/mtu value: 65000 kernel_settings_transparent_hugepages: madvise", "ansible-playbook --syntax-check kernel-roles.yml playbook: kernel-roles.yml", "ansible-playbook kernel-roles.yml BECOME password: PLAY [Configure kernel settings] ********************************************************************************** PLAY RECAP ******************************************************************************************************** [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/configuring-kernel-parameters-permanently-by-using-the-kernel-settings-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/pr01
5.4.3.2. Splitting Off a Redundant Image of a Mirrored Logical Volume
5.4.3.2. Splitting Off a Redundant Image of a Mirrored Logical Volume You can split off a redundant image of a mirrored logical volume to form a new logical volume. To split off an image, you use the --splitmirrors argument of the lvconvert command, specifying the number of redundant images to split off. You must use the --name argument of the command to specify a name for the newly-split-off logical volume. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs. In this example, LVM selects which devices to split off. You can specify which devices to split off. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs consisting of devices /dev/sdc1 and /dev/sde1 .
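To confirm the result of the split, you can list the logical volumes and their underlying devices; this is a sketch that assumes the volume group vg and the volume names used in the examples above.

# Show the original mirrored volume and the newly split-off copy volume with their devices
lvs -a -o +devices vg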
[ "lvconvert --splitmirrors 2 --name copy vg/lv", "lvconvert --splitmirrors 2 --name copy vg/lv /dev/sd[ce]1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_split
15.10. Removing a Directory Server Instance from the Replication Topology
15.10. Removing a Directory Server Instance from the Replication Topology In certain situations, such as hardware outages or structural changes, administrators want to remove Directory Server instances from a replication topology. This section explains the details about removing an instance. 15.10.1. Removing a Consumer or Hub from the Replication Topology To remove a consumer or hub from the replication topology: If the host to remove is a hub and also a supplier for other servers in the topology, configure other suppliers or hubs to replicate data to these servers. If these servers have no other supplier configured and you remove the hub, these servers become isolated from the replication topology. For details about configuring replication, see: Section 15.2, "Single-supplier Replication" Section 15.3, "Multi-Supplier Replication" Section 15.4, "Cascading Replication" On the host to remove, set the database into read-only mode to prevent any updates: On all suppliers that have a replication agreement with the host to remove, delete the replication agreements. For example: On the consumer or hub to remove disable replication for all suffixes. For example: Disabling replication automatically deletes all replication agreements for this suffix on this server. 15.10.2. Removing a Supplier from the Replication Topology Removing a supplier cleanly from the replication topology is more complex than removing a consumer or hub. This is because every supplier in the topology stores information about other suppliers, and they retain that information even if a supplier suddenly becomes unavailable. Directory Server maintains information about the replication topology in a set of meta data called the replica update vector (RUV). The RUV contains information about the supplier, such as its ID, URL, latest change state number (CSN) on the local server, and the CSN of the first change. Both suppliers and consumers store RUV information, and they use it to control replication updates. To remove a supplier cleanly, you must remove its meta data along with the configuration entries. If the replica to be removed is also a supplier for other servers in the topology, configure other suppliers or hubs to replicate data to these servers. If these servers have no other supplier configured and you remove the supplier, these servers become isolated from the replication topology. For details about configuring replication, see: Section 15.2, "Single-supplier Replication" Section 15.3, "Multi-Supplier Replication" Section 15.4, "Cascading Replication" On the supplier to remove: Set the database into read-only mode to prevent any updates. For details, see Section 2.2.2.1, "Setting a Database in Read-Only Mode" . Wait until all other servers in the topology received all data from this supplier. To verify, ensure that the CSN on other servers is equal or greater than the CSN on the supplier to remove. For example: Display the replica ID: In this example, the replica ID is 1 . Remember your replica ID for the last step of this procedure. On all suppliers that have a replication agreement with the replica to remove, delete the replication agreements. For example: On the replica to remove, disable replication for all suffixes. For example: Disabling replication automatically deletes all replication agreements for this suffix on this server. On one of the remaining suppliers in the topology, clean the RUVs for the replica ID. 
For example: The command requires you to specify the replica ID that was displayed in an earlier step of this procedure.
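As an optional check, not part of the official procedure, you can confirm that the cleaned replica ID no longer appears in the database RUV by reading the RUV tombstone entry on one of the remaining suppliers; adjust the bind credentials, host, and suffix to your environment.

# Read the replica update vector (RUV) tombstone entry for the suffix
ldapsearch -xLLL -D "cn=Directory Manager" -W -H ldap://server.example.com -b "dc=example,dc=com" "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))" nsds50ruv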
[ "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h host-to-remove.example.com -x dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-readonly nsslapd-readonly: on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt delete --suffix=\"dc=example,dc=com\" agreement_name", "dsconf -D \"cn=Directory Manager\" ldap:// host-to-remove.example.com replication disable --suffix=\" dc=example,dc=com \"", "ds-replcheck online -D \"cn=Directory Manager\" -w password -m ldap://replica-to-remove.example.com:389 -r ldap://server.example.com:389 -b dc=example,dc=com ================================================================================ Replication Synchronization Report (Tue Mar 5 09:46:20 2019) ================================================================================ Database RUV's ===================================================== Supplier RUV: {replica 1 ldap://replica-to-remove.example.com:389} 5c7e8927000100010000 5c7e89a0000100010000 {replicageneration} 5c7e8927000000010000 Replica RUV: {replica 1 ldap://replica-to-remove.example.com:389} 5c7e8927000100010000 5c7e8927000400010000 {replica 2 ldap://server.example.com:389} {replicageneration} 5c7e8927000000010000", "dsconf -D \"cn=Directory Manager\" ldap:// replica-to-remove.example.com replication get --suffix=\" dc=example,dc=com \" | grep -i \"nsds5replicaid\" nsDS5ReplicaId: 1", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt delete --suffix=\" dc=example,dc=com \" agreement_name", "dsconf -D \"cn=Directory Manager\" ldap:// replica-to-remove.example.com replication disable --suffix=\" dc=example,dc=com \"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-tasks cleanallruv --suffix=\" dc=example,dc=com \" --replica-id= 1" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/removing_a_directory_server_instance_from_the_replication_topology
20.6. Starting, Resuming, and Restoring a Virtual Machine
20.6. Starting, Resuming, and Restoring a Virtual Machine 20.6.1. Starting a Guest Virtual Machine The virsh start domain [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] command starts an inactive virtual machine that was already defined but whose state has been inactive since its last managed save or a fresh boot. By default, if the domain was saved by the virsh managedsave command, the domain will be restored to its saved state. Otherwise, it will be freshly booted. The command can take the following arguments, and the name of the virtual machine is required. --console - will attach the terminal running virsh to the domain's console device. This is runlevel 3. --paused - if this is supported by the driver, it will start the guest virtual machine in a paused state --autodestroy - the guest virtual machine is automatically destroyed when virsh disconnects --bypass-cache - used if the guest virtual machine is in the managedsave state --force-boot - discards any managedsave options and causes a fresh boot to occur Example 20.3. How to start a virtual machine The following example starts the guest1 virtual machine that you already created and is currently in the inactive state. In addition, the command attaches the guest's console to the terminal running virsh: 20.6.2. Configuring a Virtual Machine to be Started Automatically at Boot The virsh autostart [--disable] domain command will automatically start the guest virtual machine when the host machine boots. Adding the --disable argument to this command disables autostart. The guest in this case will not start automatically when the host physical machine boots. Example 20.4. How to make a virtual machine start automatically when the host physical machine starts The following example sets the guest1 virtual machine which you already created to autostart when the host boots: # virsh autostart guest1 20.6.3. Rebooting a Guest Virtual Machine Reboot a guest virtual machine using the virsh reboot domain [--mode modename ] command. Remember that this action will only return once it has executed the reboot, so there may be a time lapse from that point until the guest virtual machine actually reboots. You can control the behavior of the rebooting guest virtual machine by modifying the on_reboot element in the guest virtual machine's XML configuration file. By default, the hypervisor attempts to select a suitable shutdown method automatically. To specify an alternative method, the --mode argument can specify a comma-separated list which includes acpi and agent . The order in which drivers will try each mode is undefined, and not related to the order specified in virsh. For strict control over ordering, use a single mode at a time and repeat the command. Example 20.5. How to reboot a guest virtual machine The following example reboots a guest virtual machine named guest1 . In this example, the reboot uses the initctl method, but you can choose any mode that suits your needs. # virsh reboot guest1 --mode initctl 20.6.4. Restoring a Guest Virtual Machine The virsh restore <file> [--bypass-cache] [--xml /path/to/file ] [--running] [--paused] command restores a guest virtual machine previously saved with the virsh save command. See Section 20.7.1, "Saving a Guest Virtual Machine's Configuration" for information on the virsh save command. The restore action restarts the saved guest virtual machine, which may take some time.
The guest virtual machine's name and UUID are preserved, but the ID will not necessarily match the ID that the virtual machine had when it was saved. The virsh restore command can take the following arguments: --bypass-cache - causes the restore to avoid the file system cache but note that using this flag may slow down the restore operation. --xml - this argument must be used with an XML file name. Although this argument is usually omitted, it can be used to supply an alternative XML file for use on a restored guest virtual machine with changes only in the host-specific portions of the domain XML. For example, it can be used to account for the file naming differences in underlying storage due to disk snapshots taken after the guest was saved. --running - overrides the state recorded in the save image to start the guest virtual machine as running. --paused - overrides the state recorded in the save image to start the guest virtual machine as paused. Example 20.6. How to restore a guest virtual machine The following example restores the guest virtual machine and its running configuration file guest1-config.xml : # virsh restore guest1-config.xml --running 20.6.5. Resuming a Guest Virtual Machine The virsh resume domain command restarts the CPUs of a domain that was suspended. This operation is immediate. The guest virtual machine resumes execution from the point it was suspended. Note that this action will not resume a guest virtual machine that has been undefined. This action will not resume transient virtual machines and will only work on persistent virtual machines. Example 20.7. How to restore a suspended guest virtual machine The following example restores the guest1 virtual machine: # virsh resume guest1
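As a simple end-to-end sketch of the save and restore workflow described above, the following saves a guest to an arbitrary file and later restores it in the running state; the file path is only an example.

# Save the running guest1 virtual machine to a state file (the path is an arbitrary example)
virsh save guest1 /var/lib/libvirt/images/guest1.save

# Later, restore the guest from that file and override the saved state so it starts running
virsh restore /var/lib/libvirt/images/guest1.save --running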
[ "virsh start guest1 --console Domain guest1 started Connected to domain guest1 Escape character is ^]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-starting_a_defined_domain
Chapter 1. Updating Satellite to the next patch version
Chapter 1. Updating Satellite to the next patch version You can update your Satellite Server and Capsule Server to a new patch release version, such as from 6.16.0 to 6.16.1, by using the Satellite maintain tool. Patch releases are non-disruptive to your operating environment and are typically quick to apply. You can also update the underlying operating system; if there are pending Satellite Server updates, updating the operating system updates both. Important Perform updates regularly to resolve security vulnerabilities and other issues.
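A typical patch update with the Satellite maintain tool looks roughly like the following; the exact subcommands can differ between Satellite releases, so treat them as assumptions and confirm them with satellite-maintain --help before running anything.

# Check that the system is ready for the patch update (subcommand names are assumptions)
satellite-maintain update check

# Apply the patch release
satellite-maintain update run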
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/updating_red_hat_satellite/updating-project-to-next-patch-version_updating
Chapter 1. Upgrade overview
Chapter 1. Upgrade overview The upgrade procedure for Red Hat Quay depends on the type of installation you are using. The Red Hat Quay Operator provides a simple method to deploy and manage a Red Hat Quay cluster. This is the preferred procedure for deploying Red Hat Quay on OpenShift. The Red Hat Quay Operator should be upgraded using the Operator Lifecycle Manager (OLM) as described in the section "Upgrading Quay using the Quay Operator". The procedure for upgrading a proof-of-concept or highly available installation of Red Hat Quay and Clair is documented in the section "Standalone upgrade".
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/upgrade_red_hat_quay/upgrade_overview
Chapter 1. Post-installation configuration overview
Chapter 1. Post-installation configuration overview After installing OpenShift Container Platform, a cluster administrator can configure and customize the following components: Machine Cluster Node Network Storage Users Alerts and notifications 1.1. Configuration tasks to perform after installation Cluster administrators can perform the following post-installation configuration tasks: Configure operating system features : Machine Config Operator (MCO) manages MachineConfig objects. By using MCO, you can perform the following tasks on an OpenShift Container Platform cluster: Configure nodes by using MachineConfig objects Configure MCO-related custom resources Configure cluster features : As a cluster administrator, you can modify the configuration resources of the major features of an OpenShift Container Platform cluster. These features include: Image registry Networking configuration Image build behavior Identity provider The etcd configuration Machine set creation to handle the workloads Cloud provider credential management Configure cluster components to be private : By default, the installation program provisions OpenShift Container Platform by using a publicly accessible DNS and endpoints. If you want your cluster to be accessible only from within an internal network, configure the following components to be private: DNS Ingress Controller API server Perform node operations : By default, OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS) compute machines. As a cluster administrator, you can perform the following operations with the machines in your OpenShift Container Platform cluster: Add and remove compute machines Add and remove taints and tolerations to the nodes Configure the maximum number of pods per node Enable Device Manager Configure network : After installing OpenShift Container Platform, you can configure the following: Ingress cluster traffic Node port service range Network policy Enabling the cluster-wide proxy Configure storage : By default, containers operate using ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning : You can dynamically provision storage on demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning : You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. Configure users : OAuth access tokens allow users to authenticate themselves to the API. As a cluster administrator, you can configure OAuth to perform the following tasks: Specify an identity provider Use role-based access control to define and supply permissions to users Install an Operator from OperatorHub Manage alerts and notifications : By default, firing alerts are displayed on the Alerting UI of the web console. You can also configure OpenShift Container Platform to send alert notifications to external systems.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/post-install-configuration-overview
Chapter 3. Configuring core platform monitoring
Chapter 3. Configuring core platform monitoring 3.1. Preparing to configure core platform monitoring stack The OpenShift Container Platform installation program provides only a low number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens after the installation. This section explains which monitoring components can be configured and how to prepare for configuring the monitoring stack. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 3.1.1. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config config map. Table 3.1. Configurable core platform monitoring components Component cluster-monitoring-config config map key Prometheus Operator prometheusOperator Prometheus prometheusK8s Alertmanager alertmanagerMain Thanos Querier thanosQuerier kube-state-metrics kubeStateMetrics monitoring-plugin monitoringPlugin openshift-state-metrics openshiftStateMetrics Telemeter Client telemeterClient Metrics Server metricsServer Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. 3.1.2. Creating a cluster monitoring config map You can configure the core OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Check whether the cluster-monitoring-config ConfigMap object exists: USD oc -n openshift-monitoring get configmap cluster-monitoring-config If the ConfigMap object does not exist: Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | Apply the configuration to create the ConfigMap object: USD oc apply -f cluster-monitoring-config.yaml 3.1.3. Granting users permissions for core platform monitoring As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions for core platform monitoring. 
You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Name Description Project cluster-monitoring-metrics-api Users with this role have the ability to access Thanos Querier API endpoints. Additionally, it grants access to the core platform Prometheus API and user-defined Thanos Ruler API endpoints. openshift-monitoring cluster-monitoring-operator-alert-customization Users with this role can manage AlertingRule and AlertRelabelConfig resources for core platform monitoring. These permissions are required for the alert customization feature. openshift-monitoring monitoring-alertmanager-edit Users with this role can manage the Alertmanager API for core platform monitoring. They can also manage alert silences in the Administrator perspective of the OpenShift Container Platform web console. openshift-monitoring monitoring-alertmanager-view Users with this role can monitor the Alertmanager API for core platform monitoring. They can also view alert silences in the Administrator perspective of the OpenShift Container Platform web console. openshift-monitoring cluster-monitoring-view Users with this cluster role have the same access rights as cluster-monitoring-metrics-api role, with additional permissions, providing access to the /federate endpoint for the user-defined Prometheus. Must be bound with ClusterRoleBinding to gain access to the /federate endpoint for the user-defined Prometheus. Additional resources Resources reference for the Cluster Monitoring Operator CMO services resources 3.1.3.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 3.1.3.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the wanted monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. 
To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the wanted monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 3.2. Configuring performance and scalability for core platform monitoring You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. About performance and scalability 3.2.1. Controlling the placement and distribution of monitoring components You can move the monitoring stack components to specific nodes: Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes. Assign tolerations to enable moving components to tainted nodes. By doing so, you control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. Additional resources Using node selectors to move monitoring components 3.2.1.1. Moving monitoring components to different nodes To specify the nodes in your cluster on which monitoring stack components will run, configure the nodeSelector constraint for the components in the cluster-monitoring-config config map to match labels assigned to the nodes. Note You cannot add a node selector constraint directly to an existing scheduled pod. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node_name> <node_label> 1 1 Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the wanted label. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # ... <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 # ... 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node_label_1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. 
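For example, a common pattern is to dedicate infrastructure nodes to the monitoring stack. The following sketch is illustrative only: it assumes the target nodes already carry the node-role.kubernetes.io/infra: "" label, so substitute whichever label you use in your cluster. It places the Prometheus, Alertmanager, and Thanos Querier pods onto those nodes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Each component entry uses the same nodeSelector syntax shown above.
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
If the labeled nodes are also tainted for infrastructure workloads, combine this configuration with the tolerations described in the next section.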
Additional resources Preparing to configure core platform monitoring stack Understanding how to update labels on nodes Placing pods on specific nodes using node selectors nodeSelector (Kubernetes documentation) 3.2.1.2. Assigning tolerations to monitoring components You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Preparing to configure core platform monitoring stack Controlling pod placement using node taints Taints and Tolerations (Kubernetes documentation) 3.2.2. Setting the body size limit for metrics scraping By default, no limit exists for the uncompressed body size for data returned from scraped metrics targets. You can set a body size limit to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. In addition, by setting a body size limit, you can reduce the impact that a malicious target might have on Prometheus and on the cluster as a whole. After you set a value for enforcedBodySizeLimit , the alert PrometheusScrapeBodySizeLimitHit fires when at least one Prometheus scrape target replies with a response body larger than the configured value. Note If metrics data scraped from a target has an uncompressed body size exceeding the configured size limit, the scrape fails. Prometheus then considers this target to be down and sets its up metric value to 0 , which can trigger the TargetDown alert. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a value for enforcedBodySizeLimit to data/config.yaml/prometheusK8s to limit the body size that can be accepted per target scrape: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1 1 Specify the maximum body size for scraped metrics targets. 
This enforcedBodySizeLimit example limits the uncompressed size per target scrape to 40 megabytes. Valid numeric values use the Prometheus data size format: B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The default value is 0 , which specifies no limit. You can also set the value to automatic to calculate the limit automatically based on cluster capacity. Save the file to apply the changes. The new configuration is applied automatically. Additional resources scrape_config configuration (Prometheus documentation) 3.2.3. Managing CPU and memory resources for monitoring components You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. You can configure these limits and requests for core platform monitoring components in the openshift-monitoring namespace. 3.2.3.1. Specifying limits and requests To configure CPU and memory resources, specify values for resource limits and requests in the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the ConfigMap object named cluster-monitoring-config . You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add values to define resource limits and requests for each component you want to configure. Important Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run. Example of setting resource limits and requests apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusK8s: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosQuerier: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperator: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi metricsServer: resources: requests: cpu: 10m memory: 50Mi limits: cpu: 50m memory: 500Mi kubeStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi telemeterClient: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi openshiftStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi nodeExporter: resources: limits: cpu: 50m memory: 150Mi requests: cpu: 20m memory: 50Mi monitoringPlugin: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperatorAdmissionWebhook: resources: limits: cpu: 50m memory: 100Mi requests: cpu: 20m memory: 50Mi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About specifying limits and requests Kubernetes requests and limits documentation (Kubernetes documentation) 3.2.4. Choosing a metrics collection profile Important Metrics collection profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To choose a metrics collection profile for core OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. Prerequisites You have installed the OpenShift CLI ( oc ). You have enabled Technology Preview features by using the FeatureGate custom resource (CR). You have created the cluster-monitoring-config ConfigMap object. You have access to the cluster as a user with the cluster-admin cluster role. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the metrics collection profile setting under data/config.yaml/prometheusK8s : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: <metrics_collection_profile_name> 1 1 The name of the metrics collection profile. The available values are full or minimal . If you do not specify a value or if the collectionProfile key name does not exist in the config map, the default setting of full is used. The following example sets the metrics collection profile to minimal for the core platform instance of Prometheus: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: minimal Save the file to apply the changes. The new configuration is applied automatically. Additional resources About metrics collection profiles Viewing a list of available metrics Enabling features using feature gates 3.2.5. Configuring pod topology spread constraints You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You can configure pod topology spread constraints for monitoring pods by using the cluster-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the following settings under the data/config.yaml field to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option> 1 Specify a name of the component for which you want to set up pod topology spread constraints. 2 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. 3 Specify a key of node labels for topologyKey . Nodes that have a label with this key and identical values are considered to be in the same topology. 
The scheduler tries to put a balanced number of pods into each domain. 4 Specify a value for whenUnsatisfiable . Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 5 Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Example configuration for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app.kubernetes.io/name: prometheus Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About pod topology spread constraints for monitoring Controlling pod placement by using pod topology spread constraints Pod Topology Spread Constraints (Kubernetes documentation) 3.3. Storing and recording data for core platform monitoring Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 3.3.1. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 3.3.1.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 3.3.1.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 3 Specify the amount of required storage. The following example configures a PVC that claims persistent storage for Prometheus: Example PVC configuration apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Understanding persistent storage PersistentVolumeClaims (Kubernetes documentation) 3.3.1.3. Resizing a persistent volume You can resize a persistent volume (PV) for monitoring components, such as Prometheus or Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have configured at least one PVC for core OpenShift Container Platform monitoring components. You have installed the OpenShift CLI ( oc ). Procedure Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the previous value. The following example sets the new PVC request to 100 gigabytes for the Prometheus instance: Example storage configuration for prometheusK8s apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage.
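The same volumeClaimTemplate pattern applies to other components listed in the configurable components table, for example Alertmanager. The following is a minimal sketch, not a recommended sizing: the my-storage-class name and the 10Gi request are placeholders that you should replace with values appropriate for your cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Persistent storage for Alertmanager, using the same volumeClaimTemplate
    # syntax as the prometheusK8s examples above.
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: my-storage-class
          resources:
            requests:
              storage: 10Gi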
Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 3.3.2. Modifying retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for 15 days for core platform monitoring. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: Example of setting retention time for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Retention time and size for Prometheus metrics Preparing to configure core platform monitoring stack Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage 3.3.3. Configuring audit logs for Metrics Server You can configure audit logs for Metrics Server to help you troubleshoot issues with the server. Audit logs record the sequence of actions in a cluster. It can record user, application, or control plane activities. You can set audit log rules, which determine what events are recorded and what data they should include. This can be achieved with the following audit profiles: Metadata (default) : This profile enables the logging of event metadata including user, timestamps, resource, and verb. It does not record request and response bodies. Request : This enables the logging of event metadata and request body, but it does not record response body. This configuration does not apply for non-resource requests. RequestResponse : This enables the logging of event metadata, and request and response bodies. This configuration does not apply for non-resource requests. None : None of the previously described events are recorded. 
You can configure the audit profiles by modifying the cluster-monitoring-config config map. The following example sets the profile to Request , allowing the logging of event metadata and request body for Metrics Server: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | metricsServer: audit: profile: Request 3.3.4. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Querier. The following log levels can be applied to the relevant component in the cluster-monitoring-config ConfigMap object: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. Available component values are prometheusK8s , alertmanagerMain , prometheusOperator , and thanosQuerier . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment: USD oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. The following example lists the status of pods: USD oc -n openshift-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 3.3.5. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the queryLogFile parameter for Prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1 1 Add the full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods: USD oc -n openshift-monitoring get pods Example output ... prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m prometheus-k8s-0 6/6 Running 1 57m prometheus-k8s-1 6/6 Running 1 57m thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m ... Read the query log: USD oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources Preparing to configure core platform monitoring stack 3.3.6. Enabling query logging for Thanos Querier For default platform monitoring in the openshift-monitoring project, you can enable the Cluster Monitoring Operator (CMO) to log all queries run by Thanos Querier. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. Procedure You can enable query logging for Thanos Querier in the openshift-monitoring project: Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a thanosQuerier section under data/config.yaml and add values as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2 1 Set the value to true to enable logging and false to disable logging. The default value is false . 2 Set the value to debug , info , warn , or error . If no value exists for logLevel , the log level defaults to error . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verification Verify that the Thanos Querier pods are running. 
The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Run a test query using the following sample commands as a model: USD token=`oc create token prometheus-k8s -n openshift-monitoring` USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer USDtoken" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' Run the following command to read the query log: USD oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query Note Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map. 3.4. Configuring metrics for core platform monitoring Configure the collection of metrics to monitor how cluster components and your own workloads are performing. You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. Additional resources Understanding metrics 3.4.1. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-monitoring namespace. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheusK8s , as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. 
Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1 1 Add configuration for metrics that you want to send to the remote endpoint. Example of forwarding a single metric called my_metric apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep Example of forwarding metrics called my_metric_1 and my_metric_2 in my_namespace namespace apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep Save the file to apply the changes. The new configuration is applied automatically. 3.4.1.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. Basic authentication basicAuth Basic authentication sets the authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 3.4.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with default platform monitoring in the openshift-monitoring namespace. 3.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-monitoring namespace. 
apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. 3.4.1.2.2. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password. The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint. apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. 3.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret Object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. 3.4.1.2.4. 
Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The Oauth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that ClientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. 3.4.1.2.5. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 3.4.1.3. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for default platform monitoring in the openshift-monitoring namespace. 
Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxbackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. 9 The samples that are older than the sampleAgeLimit limit are dropped from the queue. If the value is undefined or set to 0s , the parameter is ignored. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 3.4.2. Creating cluster ID labels for metrics You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the cluster-monitoring-config config map in the openshift-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). You have configured remote write storage. Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config In the writeRelabelConfigs: section under data/config.yaml/prometheusK8s/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . 
The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Adding cluster ID labels to metrics Obtaining your cluster ID 3.5. Configuring alerts and notifications for core platform monitoring You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information. 3.5.1. Configuring external Alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for core OpenShift Container Platform projects. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/prometheusK8s : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> 1 1 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager for Prometheus by using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 3.5.1.1. Disabling the local Alertmanager A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OpenShift Container Platform monitoring stack. If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enabled: false for the alertmanagerMain component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change. Additional resources Alertmanager (Prometheus documentation) Managing alerts as an Administrator 3.5.2. Configuring secrets for Alertmanager The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver. For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object. 3.5.2.1. Adding a secret to the Alertmanager configuration You can add secrets to the Alertmanager configuration by editing the cluster-monitoring-config config map in the openshift-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. You have created the secret to be configured in Alertmanager in the openshift-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token Save the file to apply the changes. The new configuration is applied automatically. 3.5.3. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Define labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Preparing to configure core platform monitoring stack 3.5.4. Configuring alert notifications In OpenShift Container Platform 4.18, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers. Important Alertmanager does not send notifications by default. It is strongly recommended to configure Alertmanager to receive notifications by configuring alert receivers through the web console or through the alertmanager-main secret. Additional resources Sending notifications to external systems PagerDuty (PagerDuty official site) Prometheus Integration Guide (PagerDuty official site) Support version matrix for monitoring components Enabling alert routing for user-defined projects 3.5.4.1. Configuring alert routing for default platform alerts You can configure Alertmanager to send notifications. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the alertmanager-main secret in the openshift-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure Open the Alertmanager YAML configuration file: To open the Alertmanager configuration from the CLI: Print the currently active Alertmanager configuration from the alertmanager-main secret into alertmanager.yaml file: USD oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Open the alertmanager.yaml file. To open the Alertmanager configuration from the OpenShift Container Platform web console: Go to the Administration Cluster Settings Configuration Alertmanager YAML page of the web console. 
Edit the Alertmanager configuration by updating parameters in the YAML: global: resolve_timeout: 5m http_config: proxy_from_environment: true 1 route: group_wait: 30s 2 group_interval: 5m 3 repeat_interval: 12h 4 receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 2m receiver: watchdog - matchers: - "service=<your_service>" 5 routes: - matchers: - <your_matching_rules> 6 receiver: <receiver> 7 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 8 1 If you configured an HTTP cluster-wide proxy, set the proxy_from_environment parameter to true to enable proxying for all alert receivers. 2 Specify how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification. 3 Specify how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent. 4 Specify the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the repeat_interval value to less than the group_interval value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled. 5 Specify the name of the service that fires the alerts. 6 Specify labels to match your alerts. 7 Specify the name of the receiver to use for the alerts. 8 Specify the receiver configuration. Important Use the matchers key name to indicate the matchers that an alert has to fulfill to match the node. Do not use the match or match_re key names, which are both deprecated and planned for removal in a future release. If you define inhibition rules, use the following key names: target_matchers : to indicate the target matchers source_matchers : to indicate the source matchers Do not use the target_match , target_match_re , source_match , or source_match_re key names, which are deprecated and planned for removal in a future release. Example of Alertmanager configuration with PagerDuty as an alert receiver global: resolve_timeout: 5m http_config: proxy_from_environment: true route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 2m receiver: watchdog - matchers: 1 - "service=example-app" routes: - matchers: - "severity=critical" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: "<your_key>" http_config: 2 proxy_from_environment: true authorization: credentials: xxxxxxxxxx 1 Alerts of critical severity that are fired by the example-app service are sent through the team-frontend-page receiver. Typically, these types of alerts would be paged to an individual or a critical response team. 2 Custom HTTP configuration for a specific receiver. If you configure the custom HTTP configuration for a specific alert receiver, that receiver does not inherit the global HTTP config settings. Apply the new configuration in the file: To apply the changes from the CLI, run the following command: USD oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=- To apply the changes from the OpenShift Container Platform web console, click Save . 3.5.4.2. 
Configuring alert routing with the OpenShift Container Platform web console You can configure alert routing through the OpenShift Container Platform web console to ensure that you learn about important issues with your cluster. Note The OpenShift Container Platform web console provides fewer settings to configure alert routing than the alertmanager-main secret. To configure alert routing with the access to more configuration settings, see "Configuring alert routing for default platform alerts". Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure In the Administrator perspective, go to Administration Cluster Settings Configuration Alertmanager . Note Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Click Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver name and choose a Receiver type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Click Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Select whether TLS is required. Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps: Add routing label names and values in the Routing labels section of the form. Click Add label to add further routing labels. Click Create to create the receiver. 3.5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. 
Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
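As a minimal sketch, the route section of the alertmanager-main secret could use these matchers as follows; the receiver names platform-receiver and user-defined-receiver are placeholders for receivers that you have already configured:

route:
  receiver: default
  routes:
  - matchers:
    - openshift_io_alert_source="platform"
    receiver: platform-receiver
  - matchers:
    - openshift_io_alert_source!="platform"
    receiver: user-defined-receiver
receivers:
- name: default
- name: platform-receiver
- name: user-defined-receiver

With this routing, default platform alerts match the first route and are sent to platform-receiver, while all other alerts fall through to user-defined-receiver.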
[ "oc -n openshift-monitoring get configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1", "oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1", "oc label nodes <node_name> <node_label> 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusK8s: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosQuerier: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperator: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi metricsServer: resources: requests: cpu: 10m memory: 50Mi limits: cpu: 50m memory: 500Mi kubeStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi telemeterClient: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi openshiftStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi nodeExporter: resources: limits: cpu: 50m memory: 150Mi requests: cpu: 20m memory: 50Mi monitoringPlugin: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperatorAdmissionWebhook: resources: limits: cpu: 50m memory: 100Mi requests: cpu: 20m memory: 50Mi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: <metrics_collection_profile_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: minimal", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>", "apiVersion: v1 kind: ConfigMap metadata: name: 
cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app.kubernetes.io/name: prometheus", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | metricsServer: audit: profile: Request", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-monitoring get pods", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1", "oc -n openshift-monitoring get pods", "prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m prometheus-k8s-0 6/6 Running 1 57m prometheus-k8s-1 6/6 Running 1 57m thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m", "oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2", "oc -n openshift-monitoring get pods", "token=`oc create token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'", "oc -n openshift-monitoring logs 
<thanos_querier_pod_name> -c thanos-query", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep", "apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7", "apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4", "apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3", "apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>", "apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: 
ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod", "oc -n openshift-monitoring get 
secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: resolve_timeout: 5m http_config: proxy_from_environment: true 1 route: group_wait: 30s 2 group_interval: 5m 3 repeat_interval: 12h 4 receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=<your_service>\" 5 routes: - matchers: - <your_matching_rules> 6 receiver: <receiver> 7 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 8", "global: resolve_timeout: 5m http_config: proxy_from_environment: true route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: 1 - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \"<your_key>\" http_config: 2 proxy_from_environment: true authorization: credentials: xxxxxxxxxx", "oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/configuring-core-platform-monitoring
Chapter 5. Limited Availability features
Chapter 5. Limited Availability features Important This section describes Limited Availability features in Red Hat OpenShift AI. Limited Availability means that you can install and receive support for the feature only with specific approval from Red Hat. Without such approval, the feature is unsupported. This applies to all features described in this section. Tuning in OpenShift AI Tuning in OpenShift AI is available as a Limited Availability feature. The Kubeflow Training Operator and the Hugging Face Supervised Fine-tuning Trainer (SFT Trainer) enable users to fine-tune and train their models easily in a distributed environment. In this release, you can use this feature for models that are based on the PyTorch machine-learning framework.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/release_notes/limited-availability-features_relnotes
Installing on AWS
Installing on AWS OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Amazon Web Services Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_aws/index
Chapter 2. General Updates
Chapter 2. General Updates A new rollback capability for in-place upgrades With the RHEA-2018:3395 advisory, the Red Hat Upgrade Tool provides a rollback capability by using LVM snapshots for systems that meet conditions specified in the Knowledgebase Solution available at https://access.redhat.com/solutions/3534561 . (BZ#1625999)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/new_features_general_updates
Chapter 1. Using language support for Apache Camel extension
Chapter 1. Using language support for Apache Camel extension Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage . The Visual Studio Code language support extension adds language support for Apache Camel for XML DSL and Java DSL code. 1.1. About language support for Apache Camel extension This extension provides completion, validation, and documentation features for Apache Camel URI elements directly in your Visual Studio Code editor. It works as a client that uses the Microsoft Language Server Protocol to communicate with the Camel Language Server and provide all functionalities. 1.2. Features of language support for Apache Camel extension The important features of the language support extension are listed below: Language service support for Apache Camel URIs. Quick reference documentation when you hover the cursor over a Camel component. Diagnostics for Camel URIs. Navigation for Java and XML languages. Creating a Camel route specified with YAML DSL using Camel JBang (see the sketch at the end of this chapter). 1.3. Requirements The following points must be considered when using the Apache Camel Language Server: Java 11 is currently required to launch the Apache Camel Language Server. Use the java.home VS Code option to select a different version of the JDK than the default one installed on the machine. For some features, JBang must be available on the system command line. For XML DSL files: Use an .xml file extension. Specify the Camel namespace; for reference, see http://camel.apache.org/schema/blueprint or http://camel.apache.org/schema/spring . For Java DSL files: Use a .java file extension. Specify the Camel package (usually from an imported package), for example, import org.apache.camel.builder.RouteBuilder . To reference the Camel component, use from or to and a string without a space. The string cannot be a variable. For example, from("timer:timerName") works, but from( "timer:timerName") and from(aVariable) do not work. 1.4. Installing Language support for Apache Camel extension You can download the Language support for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Language Support for Apache Camel extension directly in Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel . Select the Language Support for Apache Camel option from the search results and then click Install. This installs the language support extension in your editor. Additional resources Language Support for Apache Camel by Red Hat
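The feature list above mentions creating a Camel route specified with YAML DSL through Camel JBang. As an illustration only, a minimal YAML DSL route of the kind that camel init can generate might look like the following; the timer endpoint and log message are placeholders, not output produced by the extension:

- from:
    uri: "timer:example"
    parameters:
      period: "5000"
    steps:
      - log: "Hello from the example route"

You can run such a file locally with Camel JBang, for example with camel run example.yaml, and open it in VS Code to get the completion and diagnostics described above.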
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_user_guide/csb-vscode-language-support-extension
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 operating systems, including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.442_release_notes/pr01
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/manage_secrets_with_openstack_key_manager/proc_providing-feedback-on-red-hat-documentation
Chapter 18. OpenShift
Chapter 18. OpenShift The namespace for openshift-logging specific metadata Data type group 18.1. openshift.labels Labels added by the Cluster Log Forwarder configuration Data type group
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/openshift
Chapter 6. Fixed issues
Chapter 6. Fixed issues The issues fixed in Streams for Apache Kafka 2.7 on RHEL. For details of the issues fixed in Kafka 3.7.0, refer to the Kafka 3.7.0 Release Notes. Table 6.1. Fixed issues Issue Number Description ENTMQST-5839 OAuth issue fix: oauth.fallback.username.prefix had no effect ENTMQST-5753 Producing with different embedded formats across multiple HTTP requests isn't honoured ENTMQST-5504 Add support for Kafka and Strimzi upgrades when KRaft is enabled ENTMQST-3994 ZooKeeper to KRaft migration Table 6.2. Fixed common vulnerabilities and exposures (CVEs) Issue Number Description ENTMQST-5886 CVE-2023-43642 flaw was found in SnappyInputStream in snappy-java ENTMQST-5885 CVE-2023-52428 Nimbus JOSE+JWT before 9.37.2 ENTMQST-5884 CVE-2022-4899 vulnerability was found in zstd v1.4.10 ENTMQST-5883 CVE-2021-24032 flaw was found in zstd ENTMQST-5882 CVE-2024-23944 Apache ZooKeeper: Information disclosure in persistent watcher handling ENTMQST-5881 CVE-2021-3520 a flaw in lz4 ENTMQST-5835 CVE-2024-29025 netty-codec-http: Allocation of Resources Without Limits or Throttling ENTMQST-5646 CVE-2024-1023 vert.x: io.vertx/vertx-core: memory leak due to the use of Netty FastThreadLocal data structures in Vertx ENTMQST-5667 CVE-2024-1300 vertx-core: io.vertx:vertx-core: memory leak when a TCP server is configured with TLS and SNI support
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_rhel/resolved-issues-str
Chapter 5. Scale [autoscaling/v1]
Chapter 5. Scale [autoscaling/v1] Description Scale represents a scaling request for a resource. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . spec object ScaleSpec describes the attributes of a scale subresource. status object ScaleStatus represents the current status of a scale subresource. 5.1.1. .spec Description ScaleSpec describes the attributes of a scale subresource. Type object Property Type Description replicas integer replicas is the desired number of instances for the scaled object. 5.1.2. .status Description ScaleStatus represents the current status of a scale subresource. Type object Required replicas Property Type Description replicas integer replicas is the actual number of observed instances of the scaled object. selector string selector is the label query over pods that should match the replicas count. This is same as the label selector but in the string format to avoid introspection by clients. The string will be in the same format as the query-param syntax. More info about label selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 5.2. API endpoints The following API endpoints are available: /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale GET : read scale of the specified Deployment PATCH : partially update scale of the specified Deployment PUT : replace scale of the specified Deployment /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale GET : read scale of the specified ReplicaSet PATCH : partially update scale of the specified ReplicaSet PUT : replace scale of the specified ReplicaSet /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale GET : read scale of the specified StatefulSet PATCH : partially update scale of the specified StatefulSet PUT : replace scale of the specified StatefulSet /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale GET : read scale of the specified ReplicationController PATCH : partially update scale of the specified ReplicationController PUT : replace scale of the specified ReplicationController 5.2.1. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale Table 5.1. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified Deployment Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified Deployment Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified Deployment Table 5.5. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.6. Body parameters Parameter Type Description body Scale schema Table 5.7. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.2. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale Table 5.8. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified ReplicaSet Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicaSet Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicaSet Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body Scale schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale Table 5.15. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified StatefulSet Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified StatefulSet Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified StatefulSet Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Scale schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.4. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale Table 5.22. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified ReplicationController Table 5.23. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicationController Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.25. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicationController Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.27. Body parameters Parameter Type Description body Scale schema Table 5.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty
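For illustration, the body of a PUT request to any of these scale endpoints is a small Scale object; the deployment name and namespace below are placeholders:

apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: example-deployment
  namespace: example-namespace
spec:
  replicas: 3

The server responds with the same Scale schema, where status.replicas reports the number of instances actually observed and status.selector reports the label query over the matching pods.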
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/autoscale_apis/scale-autoscaling-v1
Chapter 10. MachineSet [machine.openshift.io/v1beta1]
Chapter 10. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineSetSpec defines the desired state of MachineSet status object MachineSetStatus defines the observed state of MachineSet 10.1.1. .spec Description MachineSetSpec defines the desired state of MachineSet Type object Property Type Description deletePolicy string DeletePolicy defines the policy used to identify nodes to delete when downscaling. Defaults to "Random". Valid values are "Random, "Newest", "Oldest" minReadySeconds integer MinReadySeconds is the minimum number of seconds for which a newly created machine should be ready. Defaults to 0 (machine will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. selector object Selector is a label query over machines that should match the replica count. Label keys and values that must match in order to be controlled by this MachineSet. It must match the machine template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template object Template is the object that describes the machine that will be created if insufficient replicas are detected. 10.1.2. .spec.selector Description Selector is a label query over machines that should match the replica count. Label keys and values that must match in order to be controlled by this MachineSet. It must match the machine template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.4. 
.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.5. .spec.template Description Template is the object that describes the machine that will be created if insufficient replicas are detected. Type object Property Type Description metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the machine. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 10.1.6. .spec.template.metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. 
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 10.1.7. .spec.template.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 10.1.8. .spec.template.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids 10.1.9. .spec.template.spec Description Specification of the desired behavior of the machine. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at provider which could not get registered as Kubernetes nodes. With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by autoscaler to be able to have a provider view of the list of machines. 
Another list of nodes is queried from the k8s apiserver and then a comparison is made to find unregistered machines, which are then marked for deletion. This field will be set by the actuators and consumed by higher level entities like autoscaler that will be interfacing with cluster-api as generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (e.g. if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. 10.1.10. .spec.template.spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks will be actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 10.1.11. .spec.template.spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 10.1.12. .spec.template.spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase, or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 10.1.13. .spec.template.spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks will be actioned after the Machine has been drained. Type array 10.1.14. .spec.template.spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase, or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook.
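As a concrete illustration of the hook fields above, the following hedged example merges a preDrain lifecycle hook into .spec.template.spec.lifecycleHooks of an existing MachineSet. The MachineSet name, namespace, hook name, and owner are hypothetical placeholders chosen to follow the naming guidance (a short CamelCase name and a descriptive owner).

# Sketch only: add a single preDrain hook to a hypothetical MachineSet.
oc -n openshift-machine-api patch machineset example-machineset --type merge \
  -p '{"spec":{"template":{"spec":{"lifecycleHooks":{"preDrain":[{"name":"ExampleDrainCheck","owner":"example-drain-controller"}]}}}}}'

Because the hook lives in the machine template, it applies to Machines created after the change; existing Machines keep the lifecycle hooks they were created with unless they are updated separately.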
10.1.15. .spec.template.spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 10.1.16. .spec.template.spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 10.1.17.
.spec.template.spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids 10.1.18. .spec.template.spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 10.1.19. .spec.template.spec.taints Description The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (e.g. if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. Type array 10.1.20. .spec.template.spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 10.1.21. .status Description MachineSetStatus defines the observed state of MachineSet Type object Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this MachineSet. errorMessage string errorReason string In the event that there is a terminal problem reconciling the replicas, both ErrorReason and ErrorMessage will be set. ErrorReason will be populated with a succinct value suitable for machine interpretation, while ErrorMessage will contain a more verbose string suitable for logging and human consumption.
These fields should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the MachineTemplate's spec or the configuration of the machine controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the machine controller, or the responsible machine controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the MachineSet object and/or logged in the controller's output. fullyLabeledReplicas integer The number of replicas that have labels matching the labels of the machine template of the MachineSet. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed MachineSet. readyReplicas integer The number of ready replicas for this MachineSet. A machine is considered ready when the node has been created and is "Ready". replicas integer Replicas is the most recently observed number of replicas. 10.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1beta1/machinesets GET : list objects of kind MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets DELETE : delete collection of MachineSet GET : list objects of kind MachineSet POST : create a MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name} DELETE : delete a MachineSet GET : read the specified MachineSet PATCH : partially update the specified MachineSet PUT : replace the specified MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/scale GET : read scale of the specified MachineSet PATCH : partially update scale of the specified MachineSet PUT : replace scale of the specified MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/status GET : read status of the specified MachineSet PATCH : partially update status of the specified MachineSet PUT : replace status of the specified MachineSet 10.2.1. /apis/machine.openshift.io/v1beta1/machinesets Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind MachineSet Table 10.2. HTTP responses HTTP code Response body 200 - OK MachineSetList schema 401 - Unauthorized Empty 10.2.2. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets Table 10.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed.
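Before the individual HTTP methods are described, the following hedged example shows two ways to exercise the list endpoint above with the oc client; the openshift-machine-api namespace and the limit value are assumptions for illustration only.

# List MachineSet objects through the typed client.
oc get machinesets.machine.openshift.io -n openshift-machine-api

# The same list issued directly against the endpoint, using the limit query parameter described above.
oc get --raw "/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets?limit=2"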
HTTP method DELETE Description delete collection of MachineSet Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineSet Table 10.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.8. HTTP responses HTTP code Reponse body 200 - OK MachineSetList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineSet Table 10.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.10. Body parameters Parameter Type Description body MachineSet schema Table 10.11. 
HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 202 - Accepted MachineSet schema 401 - Unauthorized Empty 10.2.3. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the MachineSet namespace string object name and auth scope, such as for teams and projects Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineSet Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineSet Table 10.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.18. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineSet Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body Patch schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineSet Table 10.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.23. Body parameters Parameter Type Description body MachineSet schema Table 10.24. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 401 - Unauthorized Empty 10.2.4. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/scale Table 10.25. 
Global path parameters Parameter Type Description name string name of the MachineSet namespace string object name and auth scope, such as for teams and projects Table 10.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified MachineSet Table 10.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified MachineSet Table 10.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.30. Body parameters Parameter Type Description body Patch schema Table 10.31. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified MachineSet Table 10.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.33. Body parameters Parameter Type Description body Scale schema Table 10.34. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 10.2.5. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/status Table 10.35. Global path parameters Parameter Type Description name string name of the MachineSet namespace string object name and auth scope, such as for teams and projects Table 10.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineSet Table 10.37. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.38. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineSet Table 10.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.40. Body parameters Parameter Type Description body Patch schema Table 10.41. HTTP responses HTTP code Response body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineSet Table 10.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.43. Body parameters Parameter Type Description body MachineSet schema Table 10.44. HTTP responses HTTP code Response body 200 - OK MachineSet schema 201 - Created MachineSet schema 401 - Unauthorized Empty
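As a practical illustration of the scale subresource endpoints above, the following hedged example adjusts the replica count of a MachineSet; the MachineSet name, namespace, and replica count are assumptions for the example.

# Scale through the dedicated scale subresource.
oc -n openshift-machine-api scale machineset example-machineset --replicas=3

# The equivalent PATCH against the /scale endpoint; assumes an oc/kubectl release that supports the --subresource flag.
oc -n openshift-machine-api patch machineset example-machineset \
  --subresource=scale --type merge -p '{"spec":{"replicas":3}}'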
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_apis/machineset-machine-openshift-io-v1beta1
Chapter 1. Integrating with image registries
Chapter 1. Integrating with image registries Red Hat Advanced Cluster Security for Kubernetes (RHACS) integrates with a variety of image registries so that you can understand your images and apply security policies for image usage. When you integrate with image registries, you can view important image details, such as image creation date and Dockerfile details (including image layers). After you integrate RHACS with your registry, you can scan images, view image components, and apply security policies to images before or after deployment. Note When you integrate with an image registry, RHACS does not scan all images in your registry. RHACS only scans the images when you: Use the images in deployments Use the roxctl CLI to check images Use a continuous integration (CI) system to enforce security policies You can integrate RHACS with major image registries, including: Amazon Elastic Container Registry (ECR) Docker Hub Google Container Registry (GCR) Google Artifact Registry IBM Cloud Container Registry (ICR) JFrog Artifactory Microsoft Azure Container Registry (ACR) Red Hat Quay Red Hat container registries Sonatype Nexus Any other registry that uses the Docker Registry HTTP API 1.1. Automatic configuration Red Hat Advanced Cluster Security for Kubernetes includes default integrations with standard registries, such as Docker Hub and others. It can also automatically configure integrations based on artifacts found in the monitored clusters, such as image pull secrets. Usually, you do not need to configure registry integrations manually. Important If you use a Google Container Registry (GCR), Red Hat Advanced Cluster Security for Kubernetes does not create a registry integration automatically. If you use Red Hat Advanced Cluster Security Cloud Service, automatic configuration is unavailable, and you must manually create registry integrations. 1.2. Amazon ECR integrations For Amazon ECR integrations, Red Hat Advanced Cluster Security for Kubernetes automatically generates ECR registry integrations if the following conditions are met: The cloud provider for the cluster is AWS. The nodes in your cluster have an Instance Identity and Access Management (IAM) Role association and the Instance Metadata Service is available in the nodes. For example, when using Amazon Elastic Kubernetes Service (EKS) to manage your cluster, this role is known as the EKS Node IAM role. The Instance IAM role has IAM policies granting access to the ECR registries from which you are deploying. If the listed conditions are met, Red Hat Advanced Cluster Security for Kubernetes monitors deployments that pull from ECR registries and automatically generates ECR integrations for them. You can edit these integrations after they are automatically generated. 1.3. Manually configuring image registries If you are using GCR, you must manually create image registry integrations. 1.3.1. Manually configuring OpenShift Container Platform registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with OpenShift Container Platform built-in container image registry. Prerequisites You need a username and a password for authentication with the OpenShift Container Platform registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Generic Docker Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . 
If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.2. Manually configuring Amazon Elastic Container Registry You can use Red Hat Advanced Cluster Security for Kubernetes to create and modify Amazon Elastic Container Registry (ECR) integrations manually. If you are deploying from Amazon ECR, integrations for the Amazon ECR registries are usually automatically generated. However, you might want to create integrations on your own to scan images outside deployments. You can also modify the parameters of an automatically-generated integration. For example, you can change the authentication method used by an automatically-generated Amazon ECR integration to use AssumeRole authentication or other authorization models. Important To erase changes you made to an automatically-generated ECR integration, delete the integration, and Red Hat Advanced Cluster Security for Kubernetes creates a new integration for you with the automatically-generated parameters when you deploy images from Amazon ECR. Prerequisites You must have an Amazon Identity and Access Management (IAM) access key ID and a secret access key. Alternatively, you can use a node-level IAM proxy such as kiam or kube2iam . The access key must have read access to ECR. See How do I create an AWS access key? for more information. If you are running Red Hat Advanced Cluster Security for Kubernetes in Amazon Elastic Kubernetes Service (EKS) and want to integrate with an ECR from a separate Amazon account, you must first set a repository policy statement in your ECR. Follow the instructions at Setting a repository policy statement and for Actions , choose the following scopes of the Amazon ECR API operations: ecr:BatchCheckLayerAvailability ecr:BatchGetImage ecr:DescribeImages ecr:GetDownloadUrlForLayer ecr:ListImages Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Amazon ECR . Click New integration , or click one of the automatically-generated integrations to open it, then click Edit . Enter or modify the details for the following fields: Update stored credentials : Clear this box if you are modifying an integration without updating the credentials such as access keys and passwords. Integration name : The name of the integration. Registry ID : The ID of the registry. Endpoint : The address of the registry. This value is required only if you are using a private virtual private cloud (VPC) endpoint for Amazon ECR. This field is not enabled when the AssumeRole option is selected. Region : The region for the registry; for example, us-west-1 . If you are using IAM, select Use Container IAM role . Otherwise, clear the Use Container IAM role box and enter the Access key ID and Secret access key . If you are using AssumeRole authentication, select Use AssumeRole and enter the details for the following fields: AssumeRole ID : The ID of the role to assume. AssumeRole External ID (optional): If you are using an external ID with AssumeRole , you can enter it here. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.2.1. 
Using AssumeRole with Amazon ECR You can use AssumeRole to grant access to AWS resources without manually configuring each user's permissions. Instead, you can define a role with the desired permissions so that the user is granted access to assume that role. AssumeRole enables you to grant, revoke, or otherwise generally manage more fine-grained permissions. 1.3.2.1.1. Configuring AssumeRole with container IAM Before you can use AssumeRole with Red Hat Advanced Cluster Security for Kubernetes, you must first configure it. Procedure Enable the IAM OIDC provider for your EKS cluster: $ eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve Create an IAM role for your EKS cluster. Associate the newly created role with a service account: $ kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name> Restart Central to apply the changes. $ kubectl -n stackrox delete pod -l app=central Attach a policy to the role that allows the role to assume another role as required: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>" 1 } ] } 1 Replace <assumerole-readonly> with the role you want to assume. Update the trust relationship for the role you want to assume: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:role/<role-name>" 1 ] }, "Action": "sts:AssumeRole" } ] } 1 The <role-name> must match the new role you created earlier. 1.3.2.1.2. Configuring AssumeRole without container IAM To use AssumeRole without container IAM, you must use an access key and a secret key to authenticate as an AWS user with programmatic access . Procedure Depending on whether the AssumeRole user is in the same account as the ECR registry or in a different account, you must either: Create a new role with the desired permissions if the user for which you want to assume a role is in the same account as the ECR registry. Note When creating the role, you can choose any trusted entity as required. However, you must modify it after creation. Or, you must provide permissions to access the ECR registry and define its trust relationship if the user is in a different account than the ECR registry: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>" 1 } ] } 1 Replace <assumerole-readonly> with the role you want to assume. Configure the trust relationship of the role by including the user ARN under the Principal field: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:user/<role-name>" ] }, "Action": "sts:AssumeRole" } ] } 1.3.2.1.3. Configuring AssumeRole in RHACS After configuring AssumeRole in ECR, you can integrate Red Hat Advanced Cluster Security for Kubernetes with Amazon Elastic Container Registry (ECR) by using AssumeRole. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Amazon ECR . Click New Integration . Enter the details for the following fields: Integration Name : The name of the integration. Registry ID : The ID of the registry. Region : The region for the registry; for example, us-west-1 . If you are using IAM, select Use container IAM role . 
Otherwise, clear the Use container IAM role box and enter the Access key ID and Secret access key . If you are using AssumeRole, select Use AssumeRole and enter the details for the following fields: AssumeRole ID : The ID of the role to assume. AssumeRole External ID (optional): If you are using an external ID with AssumeRole , you can enter it here. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.3. Manually configuring Google Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Container Registry (GCR). Prerequisites You need either a workload identity or a service account key for authentication. The associated service account must have access to the registry. See Configuring access control for information about granting users and other projects access to GCR. If you are using GCR Container Analysis , you must also grant the following roles to the service account: Container Analysis Notes Viewer Container Analysis Occurrences Viewer Storage Object Viewer Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Type : Select Registry . Registry Endpoint : The address of the registry. Project : The Google Cloud project name. Use workload identity : Check to authenticate using a workload identity. Service account key (JSON) : Your service account key for authentication. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.4. Manually configuring Google Artifact Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Artifact Registry. Prerequisites You need either a workload identity or a service account key for authentication. The associated service account must have the Artifact Registry Reader Identity and Access Management (IAM) role roles/artifactregistry.reader . Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Artifact Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Registry endpoint : The address of the registry. Project : The Google Cloud project name. Use workload identity : Check to authenticate using a workload identity. Service account key (JSON) : Your service account key for authentication. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.5. Manually configuring Microsoft Azure Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Microsoft Azure Container Registry. Prerequisites You must have a username and a password for authentication. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Microsoft Azure Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password .
Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.6. Manually configuring JFrog Artifactory You can integrate Red Hat Advanced Cluster Security for Kubernetes with JFrog Artifactory. Prerequisites You must have a username and a password for authentication with JFrog Artifactory. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select JFrog Artifactory . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.7. Manually configuring Quay Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes (RHACS) with Quay Container Registry. You can integrate with Quay by using the following methods: Integrating with the Quay public repository (registry): This method does not require authentication. Integrating with a Quay private registry by using a robot account: This method requires that you create a robot account to use with Quay (recommended). See the Quay documentation for more information. Integrating with Quay to use the Quay scanner rather than the RHACS scanner: This method uses the API and requires an OAuth token for authentication. See "Integrating with Quay Container Registry to scan images" in the "Additional Resources" section. Prerequisites For authentication with a Quay private registry, you need the credentials associated with a robot account or an OAuth token (deprecated). Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Quay.io . Click New integration . Enter the Integration name. Enter the Endpoint , or the address of the registry. If you are integrating with the Quay public repository, under Type , select Registry , and then go to the next step. If you are integrating with a Quay private registry, under Type , select Registry and enter information in the following fields: Robot username : If you are accessing the registry by using a Quay robot account, enter the user name in the format <namespace>+<accountname> . Robot password : If you are accessing the registry by using a Quay robot account, enter the password for the robot account user name. OAuth token : If you are accessing the registry by using an OAuth token (deprecated), enter it in this field. Optional: If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Optional: To create the integration without testing, select Create integration without testing . Select Save . Note If you are editing a Quay integration but do not want to update your credentials, verify that Update stored credentials is not selected. 1.4. Additional resources Integrating with Quay Container Registry to scan images 1.4.1. Manually configuring IBM Cloud Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with IBM Cloud Container Registry.
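The prerequisites that follow call for an IBM Cloud API key. If you do not already have one, a minimal sketch of creating a key with the IBM Cloud CLI is shown here; the key name, description, and output file are placeholder values, and the account that creates the key must have read access to the registry:
$ ibmcloud login
# The name "rhacs-registry-read" and the output file name are example values only.
$ ibmcloud iam api-key-create rhacs-registry-read -d "Read-only key for RHACS image scanning" --file rhacs-api-key.json
The generated key is written to the specified JSON file; copy its apikey value into the API key field when you configure the integration.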
Prerequisites You must have an API key for authentication with the IBM Cloud Container Registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select IBM Cloud Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. API key . Select Test to test that the integration with the selected registry is working. Select Save . 1.4.2. Manually configuring Red Hat Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Red Hat Container Registry. Prerequisites You must have a username and a password for authentication with the Red Hat Container Registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save .
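Before you test or save a username and password based integration such as the Red Hat Container Registry, you can optionally confirm that the credentials work outside RHACS. This check is not part of the documented procedure and assumes that the podman CLI is installed on your workstation:
$ podman login --username <username> registry.redhat.io
# Enter the password when prompted; "Login Succeeded!" confirms that the credentials are valid.
$ podman logout registry.redhat.io
The same check applies to the other username and password based registries in this section if you replace registry.redhat.io with the registry endpoint.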
[ "eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve", "kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name>", "kubectl -n stackrox delete pod -l app=central", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role-name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:user/<role-name>\" ] }, \"Action\": \"sts:AssumeRole\" } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-with-image-registries
CLI tools
CLI tools OpenShift Container Platform 4.9 Learning how to use the command-line tools for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhocp-4.9-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"rhel-7-server-ose-4.9-rpms\"", "yum install openshift-clients", "oc <command>", "brew install openshift-cli", "oc login -u user1", "Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.", "oc new-project my-project", "Now using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc new-app https://github.com/sclorg/cakephp-ex", "--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>", "oc logs cakephp-ex-1-deploy", "--> Scaling cakephp-ex-1 to 1 --> Success", "oc project", "Using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc status", "In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.", "oc api-resources", "NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap", "oc help", "OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application", "oc create --help", "Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]", "oc explain pods", "KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources", "oc logout", "Logged \"user1\" out on \"https://openshift.example.com\"", "oc completion bash > oc_bash_completion", "sudo cp oc_bash_completion /etc/bash_completion.d/", "cat >>~/.zshrc<<EOF if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF", "apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k", "oc status", "status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example.", "oc project", "Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".", "oc project alice-project", "Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".", "oc login -u system:admin -n default", "oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]", "oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]", "oc config use-context <context_nickname>", "oc config set <property_name> <property_value>", "oc config unset <property_name>", "oc config view", "oc config view --config=<specific_filename>", "oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config view", "apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config set-context `oc config current-context` --namespace=<project_name>", "oc whoami -c", "#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"", "chmod 
+x <plugin_file>", "sudo mv <plugin_file> /usr/local/bin/.", "oc plugin list", "The following compatible plugins are available: /usr/local/bin/<plugin_file>", "oc ns", "Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-", "Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=extensions", "Print the supported API versions oc api-versions", "Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json", "Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true", "View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json", "Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in 
ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx", "Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo", "Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml", "Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80", "Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new", "Print the address of the control plane and cluster services oc cluster-info", "Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\"", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the dev cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry 
oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in docker-registry.yaml in JSON then create the resource using the edited data oc create -f docker-registry.yaml --edit -o json", "Create a new build oc create build myapp", "Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10", "Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.extensions # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring 
--aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/bar.env", "Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701", "Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx", "Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones", "Create a new image stream oc create imagestream mysql", "Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0", "Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default 
backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"", "Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob", "Create a new namespace named my-namespace oc create namespace my-namespace", "Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%", "Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort", "Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.extensions # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status", "Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets", "Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. 
If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com", "Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend", "If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from an env file oc create secret generic my-secret --from-env-file=path/to/bar.env", "Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"", "Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com", "Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080", "Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080", "Create a new service account named my-service-account oc create serviceaccount my-service-account", "Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"", "Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping acme_ldap:adamjones ajones", "Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without 
modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns", "Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all", "Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in the pod the name) oc describe pods frontend", "Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -", "Edit the service named 'docker-registry' oc edit svc/docker-registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/docker-registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config", "Perform garbage collection with the default settings oc ex dockergc", "Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date", "Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers", "Create a route based on service nginx. 
The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx # Expose a service using different generators oc expose service nginx --name=exposed-svc --port=12201 --protocol=\"TCP\" --generator=\"service/v2\" oc expose service nginx --name=my-route --port=12201 --generator=\"route/v1\" # Exposing a service using the \"route/v1\" generator (default) will create a new exposed route with the \"--name\" provided # (or the name of the service otherwise). You may not specify a \"--protocol\" or \"--target-port\" option when using this generator", "Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf", "List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7", "Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt", "Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: Wildcard filter is not supported with append. Pass a single os/arch to append oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz", "Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract. Pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. 
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]", "Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64", "Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror 
myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.*", "Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm", "Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-", "Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass", "Log out oc logout", "Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container", "List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. 
Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml", "Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . --image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp", "Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"", "Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector 
oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh", "Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000", "Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of
contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -", "Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project", "List all projects oc projects", "To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api", "Display information about the integrated registry oc registry info", "Log in to the integrated registry oc registry login # Log in as the default service account in the current namespace oc registry login -z default # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS", "Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json", "Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json", "Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx", "View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3", "Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json", "Mark the nginx deployment as paused. 
Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx", "Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc", "Resume an already paused deployment oc rollout resume dc/nginx", "Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend", "Watch the status of the latest rollout oc rollout status dc/nginx", "Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3", "Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled", "Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir", "Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ...
<argN>", "Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web", "Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount # If the cluster's serviceAccountConfig is operating with limitSecretReferences: True, secrets must be added to the pod's service account whitelist in order to be available to the pod oc secrets link pod-sa pod-secret", "Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name", "Create a kubeconfig file for service account 'default' oc serviceaccounts create-kubeconfig 'default' > default.kubeconfig", "Get the service account token from service account 'default' oc serviceaccounts get-token 'default'", "Generate a new token for service account 'default' oc serviceaccounts new-token 'default' # Generate a new token for service account 'default' and apply # labels 'foo' and 'bar' to the new token for identification oc serviceaccounts new-token 'default' --labels foo=foo-value,bar=bar-value", "Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"", "Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret", "Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir", "Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh", "Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined 
on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp", "Set a deployment configs's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment configs's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all", "Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30", "Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment 
nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero", "Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -", "Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml", "Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml", "Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main", "List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (pvc) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and 
by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string>", "Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait", "See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest", "Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d", "Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client", "Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can set it to false oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s", "Display the currently authenticated user oc whoami", "Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all", "Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest 
myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageContentSourcePolicy.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageContentSourcePolicies generated by oc adm catalog mirror oc delete imagecontentsourcepolicy -l operators.openshift.org/catalog=true", "Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp", "Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\"", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the dev cluster entry oc config set-cluster e2e 
--insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "Mark node \"foo\" as unschedulable oc adm cordon foo", "Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml", "Output a template for the error page to stdout oc adm create-error-template", "Output a template for the login page to stdout oc adm create-login-template", "Output a template for the provider selection page to stdout oc adm create-provider-selection-template", "Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900", "Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2", "Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name", "Prune all orphaned 
groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm groups prune --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm groups prune --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2", "Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in a whitelist file with an LDAP server oc adm groups sync --whitelist=/path/to/whitelist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm", "Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions", "Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm", "Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod-dir oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh", "Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'", "Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/logs oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron", "Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own 
isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'", "Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'", "Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1", "Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2", "Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm", "Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm", "Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups 
specified in a whitelist oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure http protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm", "Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws", "Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.2.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.2.0 4.2.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.2.2 --pullspecs", "Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.3.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.3.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.3.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.3.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature", "Create a release from the latest origin images and push to a DockerHub repo oc adm release new --from-image-stream=4.1 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1 --name 4.1.1 --previous 4.1.0 --metadata ... 
--to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.svc.ci.openshift.org/origin/release:v4.1", "Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label mylabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule", "Show usage statistics for images oc adm top images", "Show usage statistics for image streams oc adm top imagestreams", "Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME", "Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel", "Mark node \"foo\" as schedulable oc adm uncordon foo", "Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all", "odo delete --deploy", "odo login -u developer -p developer", "odo catalog list components", "Odo Devfile Components: NAME DESCRIPTION REGISTRY dotnet50 Stack with .NET 5.0 DefaultDevfileRegistry dotnet60 Stack with .NET 6.0 DefaultDevfileRegistry dotnetcore31 Stack with .NET Core 3.1 DefaultDevfileRegistry go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry java-openliberty Java application Maven-built stack using the Open Liberty ru... DefaultDevfileRegistry java-openliberty-gradle Java application Gradle-built stack using the Open Liberty r... DefaultDevfileRegistry java-quarkus Quarkus with Java DefaultDevfileRegistry java-springboot Spring Boot(R) using Java DefaultDevfileRegistry java-vertx Upstream Vert.x using Java DefaultDevfileRegistry java-websphereliberty Java application Maven-built stack using the WebSphere Liber... 
DefaultDevfileRegistry java-websphereliberty-gradle Java application Gradle-built stack using the WebSphere Libe... DefaultDevfileRegistry java-wildfly Upstream WildFly DefaultDevfileRegistry java-wildfly-bootable-jar Java stack with WildFly in bootable Jar mode, OpenJDK 11 and... DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry nodejs-angular Stack with Angular 12 DefaultDevfileRegistry nodejs-nextjs Stack with Next.js 11 DefaultDevfileRegistry nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry nodejs-react Stack with React 17 DefaultDevfileRegistry nodejs-svelte Stack with Svelte 3 DefaultDevfileRegistry nodejs-vue Stack with Vue 3 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry python-django Python3.7 with Django DefaultDevfileRegistry", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz -o odo.tar.gz tar xvzf odo.tar.gz", "chmod +x <filename>", "echo USDPATH", "odo version", "C:\\> path", "C:\\> odo version", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.tar.gz -o odo.tar.gz tar xvzf odo.tar.gz", "chmod +x odo", "echo USDPATH", "odo version", "ext install redhat.vscode-openshift-connector", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift Developer Tools and Services*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"ocp-tools-4.9-for-rhel-8-x86_64-rpms\"", "yum install odo", "odo version", "odo preference view", "PARAMETER CURRENT_VALUE UpdateNotification NamePrefix Timeout BuildTimeout PushTimeout Ephemeral ConsentTelemetry true", "odo preference set <key> <value>", "odo preference set updatenotification false", "Global preference was successfully updated", "odo preference unset <key>", "odo preference unset updatenotification ? 
Do you want to unset updatenotification in the preference (y/N) y", "Global preference was successfully updated", ".git *.js tests/", "components: - image: imageName: quay.io/myusername/myimage dockerfile: uri: ./Dockerfile 1 buildContext: USD{PROJECTS_ROOT} 2 name: component-built-from-dockerfile", "odo catalog list components", "NAME DESCRIPTION REGISTRY go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry [...]", "odo catalog describe component", "odo catalog describe component nodejs", "* Registry: DefaultDevfileRegistry 1 Starter Projects: 2 --- name: nodejs-starter attributes: {} description: \"\" subdir: \"\" projectsource: sourcetype: \"\" git: gitlikeprojectsource: commonprojectsource: {} checkoutfrom: null remotes: origin: https://github.com/odo-devfiles/nodejs-ex.git zip: null custom: null", "odo catalog list services", "Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database redis-operator.v0.8.0 RedisCluster, Redis", "odo catalog search service", "odo catalog search service postgres", "Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database", "odo catalog describe service", "odo catalog describe service postgresql-operator.v0.1.1/Database", "KIND: Database VERSION: v1alpha1 DESCRIPTION: Database is the Schema for the the Database Database API FIELDS: awsAccessKeyId (string) AWS S3 accessKey/token ID Key ID of AWS S3 storage. Default Value: nil Required to create the Secret with the data to allow send the backup files to AWS S3 storage. [...]", "odo catalog describe service redis-operator.v0.8.0", "NAME: redis-operator.v0.8.0 DESCRIPTION: A Golang based redis operator that will make/oversee Redis standalone/cluster mode setup on top of the Kubernetes. It can create a redis cluster setup with best practices on Cloud as well as the Bare metal environment. Also, it provides an in-built monitoring capability using ... (cut short for brevity) Logging Operator is licensed under [Apache License, Version 2.0](https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/LICENSE) CRDs: NAME DESCRIPTION RedisCluster Redis Cluster Redis Redis", "odo create nodejs mynodejs", "odo create nodejs mynodejs --context ./node-backend", "odo create nodejs --app myapp --project backend", "odo catalog describe component nodejs", "odo create nodejs --starter nodejs-starter", "odo create mynodejs --devfile https://raw.githubusercontent.com/odo-devfiles/registry/master/devfiles/nodejs/devfile.yaml", "odo create ? Which devfile component type do you wish to create go ? What do you wish to name the new devfile component go-api ? What project do you want the devfile component to be created in default Devfile Object Validation [✓] Checking devfile existence [164258ns] [✓] Creating a devfile component from registry: DefaultDevfileRegistry [246051ns] Validation [✓] Validating if devfile name is correct [92255ns] ? Do you want to download a starter project Yes Starter Project [✓] Downloading starter project go-starter from https://github.com/devfile-samples/devfile-stack-go.git [429ms] Please use odo push command to create the component with source deployed", "odo delete", "odo delete --deploy", "odo delete --all", "schemaVersion: 2.2.0 [...]
variables: CONTAINER_IMAGE: quay.io/phmartin/myimage commands: - id: build-image apply: component: outerloop-build - id: deployk8s apply: component: outerloop-deploy - id: deploy composite: commands: - build-image - deployk8s group: kind: deploy isDefault: true components: - name: outerloop-build image: imageName: \"{{CONTAINER_IMAGE}}\" dockerfile: uri: ./Dockerfile buildContext: USD{PROJECTS_ROOT} - name: outerloop-deploy kubernetes: inlined: | kind: Deployment apiVersion: apps/v1 metadata: name: my-component spec: replicas: 1 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: main image: {{CONTAINER_IMAGE}}", "odo list", "APP NAME PROJECT TYPE STATE MANAGED BY ODO app backend myproject spring Pushed Yes", "odo service list", "NAME MANAGED BY ODO STATE AGE PostgresCluster/hippo Yes (backend) Pushed 59m41s", "odo link PostgresCluster/hippo", "[✓] Successfully created link between component \"backend\" and service \"PostgresCluster/hippo\" To apply the link, please use `odo push`", "odo url list", "Found the following URLs for component backend NAME STATE URL PORT SECURE KIND 8080-tcp Pushed http://8080-tcp.192.168.39.112.nip.io 8080 false ingress", "odo describe", "Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Environment Variables: · POSTGRESCLUSTER_PGBOUNCER-EMPTY · POSTGRESCLUSTER_PGBOUNCER.INI · POSTGRESCLUSTER_ROOT.CRT · POSTGRESCLUSTER_VERIFIER · POSTGRESCLUSTER_ID_ECDSA · POSTGRESCLUSTER_PGBOUNCER-VERIFIER · POSTGRESCLUSTER_TLS.CRT · POSTGRESCLUSTER_PGBOUNCER-URI · POSTGRESCLUSTER_PATRONI.CRT-COMBINED · POSTGRESCLUSTER_USER · pgImage · pgVersion · POSTGRESCLUSTER_CLUSTERIP · POSTGRESCLUSTER_HOST · POSTGRESCLUSTER_PGBACKREST_REPO.CONF · POSTGRESCLUSTER_PGBOUNCER-USERS.TXT · POSTGRESCLUSTER_SSH_CONFIG · POSTGRESCLUSTER_TLS.KEY · POSTGRESCLUSTER_CONFIG-HASH · POSTGRESCLUSTER_PASSWORD · POSTGRESCLUSTER_PATRONI.CA-ROOTS · POSTGRESCLUSTER_DBNAME · POSTGRESCLUSTER_PGBOUNCER-PASSWORD · POSTGRESCLUSTER_SSHD_CONFIG · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.KEY · POSTGRESCLUSTER_PGBACKREST_INSTANCE.CONF · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CA-ROOTS · POSTGRESCLUSTER_PGBOUNCER-HOST · POSTGRESCLUSTER_PORT · POSTGRESCLUSTER_ROOT.KEY · POSTGRESCLUSTER_SSH_KNOWN_HOSTS · POSTGRESCLUSTER_URI · POSTGRESCLUSTER_PATRONI.YAML · POSTGRESCLUSTER_DNS.CRT · POSTGRESCLUSTER_DNS.KEY · POSTGRESCLUSTER_ID_ECDSA.PUB · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CRT · POSTGRESCLUSTER_PGBOUNCER-PORT · POSTGRESCLUSTER_CA.CRT", "ls kubernetes odo-service-backend-postgrescluster-hippo.yaml odo-service-hippo.yaml", "odo unlink PostgresCluster/hippo", "[✓] Successfully unlinked component \"backend\" from service \"PostgresCluster/hippo\" To apply the changes, please use `odo push`", "ls kubernetes odo-service-hippo.yaml", "odo link PostgresCluster/hippo --inlined", "[✓] Successfully created link between component \"backend\" and service \"PostgresCluster/hippo\" To apply the link, please use `odo push`", "kubernetes: inlined: | apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: creationTimestamp: null name: backend-postgrescluster-hippo spec: application: group: apps name: backend-app resource: deployments version: v1 bindAsFiles: false detectBindingResources: true services: - group: 
postgres-operator.crunchydata.com id: hippo kind: PostgresCluster name: hippo version: v1beta1 status: secret: \"\" name: backend-postgrescluster-hippo", "odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}'", "odo exec -- env | grep pgVersion", "pgVersion=13", "odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}'", "odo exec -- env | grep -e \"pgVersion\\|pgImage\"", "pgVersion=13 pgImage=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0", "Linked Services: · PostgresCluster/hippo", "odo unlink PostgresCluster/hippo odo push", "odo link PostgresCluster/hippo --bind-as-files odo push", "odo describe Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 · SERVICE_BINDING_ROOT=/bindings · SERVICE_BINDING_ROOT=/bindings Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Files: · /bindings/backend-postgrescluster-hippo/pgbackrest_instance.conf · /bindings/backend-postgrescluster-hippo/user · /bindings/backend-postgrescluster-hippo/ssh_known_hosts · /bindings/backend-postgrescluster-hippo/clusterIP · /bindings/backend-postgrescluster-hippo/password · /bindings/backend-postgrescluster-hippo/patroni.yaml · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-host · /bindings/backend-postgrescluster-hippo/root.key · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.key · /bindings/backend-postgrescluster-hippo/pgbouncer.ini · /bindings/backend-postgrescluster-hippo/uri · /bindings/backend-postgrescluster-hippo/config-hash · /bindings/backend-postgrescluster-hippo/pgbouncer-empty · /bindings/backend-postgrescluster-hippo/port · /bindings/backend-postgrescluster-hippo/dns.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-uri · /bindings/backend-postgrescluster-hippo/root.crt · /bindings/backend-postgrescluster-hippo/ssh_config · /bindings/backend-postgrescluster-hippo/dns.key · /bindings/backend-postgrescluster-hippo/host · /bindings/backend-postgrescluster-hippo/patroni.crt-combined · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.ca-roots · /bindings/backend-postgrescluster-hippo/tls.key · /bindings/backend-postgrescluster-hippo/verifier · /bindings/backend-postgrescluster-hippo/ca.crt · /bindings/backend-postgrescluster-hippo/dbname · /bindings/backend-postgrescluster-hippo/patroni.ca-roots · /bindings/backend-postgrescluster-hippo/pgbackrest_repo.conf · /bindings/backend-postgrescluster-hippo/pgbouncer-port · /bindings/backend-postgrescluster-hippo/pgbouncer-verifier · /bindings/backend-postgrescluster-hippo/id_ecdsa · /bindings/backend-postgrescluster-hippo/id_ecdsa.pub · /bindings/backend-postgrescluster-hippo/pgbouncer-password · /bindings/backend-postgrescluster-hippo/pgbouncer-users.txt · /bindings/backend-postgrescluster-hippo/sshd_config · /bindings/backend-postgrescluster-hippo/tls.crt", "odo exec -- cat /bindings/backend-postgrescluster-hippo/password", "q({JC:jn^mm/Bw}eu+j.GX{k", "odo exec -- cat /bindings/backend-postgrescluster-hippo/user", "hippo", "odo exec -- cat /bindings/backend-postgrescluster-hippo/clusterIP", "10.101.78.56", "odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' --bind-as-files odo push", "odo 
exec -- cat /bindings/backend-postgrescluster-hippo/pgVersion", "13", "odo exec -- cat /bindings/backend-postgrescluster-hippo/pgImage", "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0", "odo registry list", "NAME URL SECURE DefaultDevfileRegistry https://registry.devfile.io No", "odo registry add", "odo registry add StageRegistry https://registry.stage.devfile.io New registry successfully added", "odo registry add MyRegistry https://myregistry.example.com --token <access_token> New registry successfully added", "odo registry delete", "odo registry delete StageRegistry ? Are you sure you want to delete registry \"StageRegistry\" Yes Successfully deleted registry", "odo registry update", "odo registry update MyRegistry https://otherregistry.example.com --token <other_access_token> ? Are you sure you want to update registry \"MyRegistry\" Yes Successfully updated registry", "odo service create", "odo catalog list services Services available through Operators NAME CRDs redis-operator.v0.8.0 RedisCluster, Redis odo service create redis-operator.v0.8.0/Redis my-redis-service Successfully added service to the configuration; do 'odo push' to create service on the cluster", "cat kubernetes/odo-service-my-redis-service.yaml", "apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "cat devfile.yaml", "[...] components: - kubernetes: uri: kubernetes/odo-service-my-redis-service.yaml name: my-redis-service [...]", "odo service create redis-operator.v0.8.0/Redis", "odo service create redis-operator.v0.8.0/Redis my-redis-service --inlined Successfully added service to the configuration; do 'odo push' to create service on the cluster", "cat devfile.yaml", "[...] 
components: - kubernetes: inlined: | apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: my-redis-service [...]", "odo service create redis-operator.v0.8.0/Redis my-redis-service -p kubernetesConfig.image=quay.io/opstree/redis:v6.2.5 -p kubernetesConfig.serviceType=ClusterIP -p redisExporter.image=quay.io/opstree/redis-exporter:1.0 Successfully added service to the configuration; do 'odo push' to create service on the cluster", "cat kubernetes/odo-service-my-redis-service.yaml", "apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0", "cat > my-redis.yaml <<EOF apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 EOF", "odo service create --from-file my-redis.yaml Successfully added service to the configuration; do 'odo push' to create service on the cluster", "odo service delete", "odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service Yes (api) Deleted locally 5m39s", "odo service delete Redis/my-redis-service ? Are you sure you want to delete Redis/my-redis-service Yes Service \"Redis/my-redis-service\" has been successfully deleted; do 'odo push' to delete service from the cluster", "odo service list", "odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service-1 Yes (api) Not pushed Redis/my-redis-service-2 Yes (api) Pushed 52s Redis/my-redis-service-3 Yes (api) Deleted locally 1m22s", "odo service describe", "odo service describe Redis/my-redis-service Version: redis.redis.opstreelabs.in/v1beta1 Kind: Redis Name: my-redis-service Parameters: NAME VALUE kubernetesConfig.image quay.io/opstree/redis:v6.2.5 kubernetesConfig.serviceType ClusterIP redisExporter.image quay.io/opstree/redis-exporter:1.0", "odo storage create", "odo storage create store --path /data --size 1Gi [✓] Added storage store to nodejs-project-ufyy odo storage create tempdir --path /tmp --size 2Gi --ephemeral [✓] Added storage tempdir to nodejs-project-ufyy Please use `odo push` command to make the storage accessible to the component", "odo storage list", "odo storage list The component 'nodejs-project-ufyy' has the following storage attached: NAME SIZE PATH STATE store 1Gi /data Not Pushed tempdir 2Gi /tmp Not Pushed", "odo storage delete", "odo storage delete store -f Deleted storage store from nodejs-project-ufyy Please use `odo push` command to delete the storage from the cluster", "components: - name: nodejs1 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi endpoints: - name: \"3000-tcp\" targetPort: 3000 mountSources: true - name: nodejs2 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi", "odo storage create --container", "odo storage create store --path /data --size 1Gi --container nodejs2 [✓] Added storage store to nodejs-testing-xnfg Please use `odo push` command 
to make the storage accessible to the component", "odo storage list", "The component 'nodejs-testing-xnfg' has the following storage attached: NAME SIZE PATH CONTAINER STATE store 1Gi /data nodejs2 Not Pushed", "tar xvzf <file>", "echo USDPATH", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*pipelines*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"pipelines-1.6-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"pipelines-1.6-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"pipelines-1.6-for-rhel-8-ppc64le-rpms\"", "yum install openshift-pipelines-client", "tkn version", "C:\\> path", "echo USDPATH", "tkn completion bash > tkn_bash_completion", "sudo cp tkn_bash_completion /etc/bash_completion.d/", "tkn", "tkn completion bash", "tkn version", "tkn pipeline --help", "tkn pipeline delete mypipeline -n myspace", "tkn pipeline describe mypipeline", "tkn pipeline list", "tkn pipeline logs -f mypipeline", "tkn pipeline start mypipeline", "tkn pipelinerun -h", "tkn pipelinerun cancel mypipelinerun -n myspace", "tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace", "tkn pipelinerun delete -n myspace --keep 5 1", "tkn pipelinerun delete --all", "tkn pipelinerun describe mypipelinerun -n myspace", "tkn pipelinerun list -n myspace", "tkn pipelinerun logs mypipelinerun -a -n myspace", "tkn task -h", "tkn task delete mytask1 mytask2 -n myspace", "tkn task describe mytask -n myspace", "tkn task list -n myspace", "tkn task logs mytask mytaskrun -n myspace", "tkn task start mytask -s <ServiceAccountName> -n myspace", "tkn taskrun -h", "tkn taskrun cancel mytaskrun -n myspace", "tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace", "tkn taskrun delete -n myspace --keep 5 1", "tkn taskrun describe mytaskrun -n myspace", "tkn taskrun list -n myspace", "tkn taskrun logs -f mytaskrun -n myspace", "tkn condition --help", "tkn condition delete mycondition1 -n myspace", "tkn condition describe mycondition1 -n myspace", "tkn condition list -n myspace", "tkn resource -h", "tkn resource create -n myspace", "tkn resource delete myresource -n myspace", "tkn resource describe myresource -n myspace", "tkn resource list -n myspace", "tkn clustertask --help", "tkn clustertask delete mytask1 mytask2", "tkn clustertask describe mytask1", "tkn clustertask list", "tkn clustertask start mytask", "tkn eventlistener -h", "tkn eventlistener delete mylistener1 mylistener2 -n myspace", "tkn eventlistener describe mylistener -n myspace", "tkn eventlistener list -n myspace", "tkn eventlistener logs mylistener -n myspace", "tkn triggerbinding -h", "tkn triggerbinding delete mybinding1 mybinding2 -n myspace", "tkn triggerbinding describe mybinding -n myspace", "tkn triggerbinding list -n myspace", "tkn triggertemplate -h", "tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`", "tkn triggertemplate describe mytemplate -n `myspace`", "tkn triggertemplate list -n myspace", "tkn clustertriggerbinding -h", "tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2", "tkn clustertriggerbinding describe myclusterbinding", "tkn clustertriggerbinding list", "tkn hub -h", "tkn hub --api-server https://api.hub.tekton.dev", "tkn hub downgrade task mytask --to version -n mynamespace", "tkn hub get [pipeline | task] myresource --from tekton --version version", "tkn hub info task mytask --from tekton --version version", "tkn hub install task mytask --from tekton --version 
version -n mynamespace", "tkn hub reinstall task mytask --from tekton --version version -n mynamespace", "tkn hub search --tags cli", "tkn hub upgrade task mytask --to version -n mynamespace", "tar xvf <file>", "echo USDPATH", "sudo mv ./opm /usr/local/bin/", "C:\\> path", "C:\\> move opm.exe <directory>", "opm version", "Version: version.Version{OpmVersion:\"v1.18.0\", GitCommit:\"32eb2591437e394bdc58a58371c5cd1e6fe5e63f\", BuildDate:\"2021-09-21T10:41:00Z\", GoOs:\"linux\", GoArch:\"amd64\"}", "opm <command> [<subcommand>] [<argument>] [<flags>]", "opm index <subcommand> [<flags>]", "opm index add [<flags>]", "opm index export [<flags>]", "opm index prune [<flags>]", "opm index prune-stranded [<flags>]", "opm index rm [<flags>]", "opm init <package_name> [<flags>]", "opm render <index_image | bundle_image | sqlite_file> [<flags>]", "opm validate <directory> [<flags>]", "opm serve <source_path> [<flags>]", "tar xvf operator-sdk-v1.10.1-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.10.1-ocp\",", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/cli_tools/index
Chapter 88. workflow
Chapter 88. workflow This chapter describes the commands under the workflow command. 88.1. workflow create Create new workflow. Usage: Table 88.1. Positional arguments Value Summary definition Workflow definition file. Table 88.2. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --namespace [NAMESPACE] Namespace to create the workflow within. --public With this flag workflow will be marked as "public". Table 88.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.2. workflow definition show Show workflow definition. Usage: Table 88.7. Positional arguments Value Summary identifier Workflow id or name. Table 88.8. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workflow from. 88.3. workflow delete Delete workflow. Usage: Table 88.9. Positional arguments Value Summary workflow Name or id of workflow(s). Table 88.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workflow from. 88.4. workflow engine service list List all services. Usage: Table 88.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 88.12. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.5. workflow env create Create new environment. Usage: Table 88.16. Positional arguments Value Summary file Environment configuration file in json or yaml Table 88.17. Command arguments Value Summary -h, --help Show this help message and exit Table 88.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.6. workflow env delete Delete environment. Usage: Table 88.22. Positional arguments Value Summary environment Name of environment(s). Table 88.23. Command arguments Value Summary -h, --help Show this help message and exit 88.7. workflow env list List all environments. Usage: Table 88.24. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 88.25. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.26. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.28. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.8. workflow env show Show specific environment. Usage: Table 88.29. Positional arguments Value Summary environment Environment name Table 88.30. Command arguments Value Summary -h, --help Show this help message and exit --export Export the environment suitable for import Table 88.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.9. workflow env update Update environment. Usage: Table 88.35. Positional arguments Value Summary file Environment configuration file in json or yaml Table 88.36. Command arguments Value Summary -h, --help Show this help message and exit Table 88.37. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.38. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.39. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.40. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.10. workflow execution create Create new execution. Usage: Table 88.41. Positional arguments Value Summary workflow_identifier Workflow id or name. workflow name will be deprecated since Mitaka. workflow_input Workflow input params Workflow additional parameters Table 88.42. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Workflow namespace. -d DESCRIPTION, --description DESCRIPTION Execution description -s [SOURCE_EXECUTION_ID] Workflow execution id which will allow operators to create a new workflow execution based on the previously successful executed workflow. 
Example: mistral execution-create -s 123e4567-e89b-12d3-a456-426655440000 Table 88.43. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.44. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.45. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.46. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.11. workflow execution delete Delete execution. Usage: Table 88.47. Positional arguments Value Summary execution Id of execution identifier(s). Table 88.48. Command arguments Value Summary -h, --help Show this help message and exit --force Force the deletion of an execution. might cause a cascade of errors if used for running executions. 88.12. workflow execution input show Show execution input data. Usage: Table 88.49. Positional arguments Value Summary id Execution id Table 88.50. Command arguments Value Summary -h, --help Show this help message and exit 88.13. workflow execution list List all executions. Usage: Table 88.51. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest --task [TASK] Parent task execution id associated with workflow execution list. Table 88.52. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.53. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.54. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.55. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.14. workflow execution output show Show execution output data. Usage: Table 88.56. Positional arguments Value Summary id Execution id Table 88.57. 
Command arguments Value Summary -h, --help Show this help message and exit 88.15. workflow execution published show Show workflow global published variables. Usage: Table 88.58. Positional arguments Value Summary id Workflow id Table 88.59. Command arguments Value Summary -h, --help Show this help message and exit 88.16. workflow execution report show Print execution report. Usage: Table 88.60. Positional arguments Value Summary id Execution id Table 88.61. Command arguments Value Summary -h, --help Show this help message and exit --errors-only Only error paths will be included. --no-errors-only Not only error paths will be included. --max-depth [MAX_DEPTH] Maximum depth of the workflow execution tree. if 0, only the root workflow execution and its tasks will be included 88.17. workflow execution show Show specific execution. Usage: Table 88.62. Positional arguments Value Summary execution Execution identifier Table 88.63. Command arguments Value Summary -h, --help Show this help message and exit Table 88.64. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.65. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.66. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.67. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.18. workflow execution update Update execution. Usage: Table 88.68. Positional arguments Value Summary id Execution identifier Table 88.69. Command arguments Value Summary -h, --help Show this help message and exit -s {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED}, --state {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED} Execution state -e ENV, --env ENV Environment variables -d DESCRIPTION, --description DESCRIPTION Execution description Table 88.70. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.72. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.19. workflow list List all workflows. Usage: Table 88.74. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. 
--sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 88.75. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.76. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.77. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.20. workflow show Show specific workflow. Usage: Table 88.79. Positional arguments Value Summary workflow Workflow id or name. Table 88.80. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workflow from. Table 88.81. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.83. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.84. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.21. workflow update Update workflow. Usage: Table 88.85. Positional arguments Value Summary definition Workflow definition Table 88.86. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --id ID Workflow id. --namespace [NAMESPACE] Namespace of the workflow. --public With this flag workflow will be marked as "public". Table 88.87. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.88. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.89. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.90. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.22. workflow validate Validate workflow. Usage: Table 88.91. Positional arguments Value Summary definition Workflow definition file Table 88.92. Command arguments Value Summary -h, --help Show this help message and exit Table 88.93. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.94. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.95. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.96. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
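The usage strings listed below give the full option syntax for each subcommand. As a concrete illustration, the following hypothetical session creates a public workflow from a definition file, starts an execution with JSON input, and inspects the result; the file name, input values, and execution ID are placeholders for this sketch, not values taken from the reference itself:
openstack workflow create my_workflow.yaml --public
openstack workflow execution create my_workflow '{"vm_name": "demo"}'
openstack workflow execution list --limit 10
openstack workflow execution show <execution_id>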
[ "openstack workflow create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--namespace [NAMESPACE]] [--public] definition", "openstack workflow definition show [-h] [--namespace [NAMESPACE]] identifier", "openstack workflow delete [-h] [--namespace [NAMESPACE]] workflow [workflow ...]", "openstack workflow engine service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow env create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] file", "openstack workflow env delete [-h] environment [environment ...]", "openstack workflow env list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow env show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--export] environment", "openstack workflow env update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] file", "openstack workflow execution create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [-d DESCRIPTION] [-s [SOURCE_EXECUTION_ID]] [workflow_identifier] [workflow_input] [params]", "openstack workflow execution delete [-h] [--force] execution [execution ...]", "openstack workflow execution input show [-h] id", "openstack workflow execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [--task [TASK]]", "openstack workflow execution output show [-h] id", "openstack workflow execution published show [-h] id", "openstack workflow execution report show [-h] [--errors-only] [--no-errors-only] [--max-depth [MAX_DEPTH]] id", "openstack workflow execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] execution", "openstack workflow execution update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-s {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED}] [-e ENV] [-d DESCRIPTION] id", "openstack workflow list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter 
FILTERS]", "openstack workflow show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workflow", "openstack workflow update [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--id ID] [--namespace [NAMESPACE]] [--public] definition", "openstack workflow validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/workflow
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_api/proc_providing-feedback-on-red-hat-documentation_using-idm-api
Chapter 4. Serving and chatting with the models
Chapter 4. Serving and chatting with the models To interact with various models on Red Hat Enterprise Linux AI you must serve the model, which hosts it on a server, then you can chat with the models. 4.1. Serving the model To interact with the models, you must first activate the model in a machine through serving. The ilab model serve commands starts a vLLM server that allows you to chat with the model. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You installed your preferred Granite LLMs. You have root user access on your machine. Procedure If you do not specify a model, you can serve the default model, granite-7b-redhat-lab , by running the following command: USD ilab model serve To serve a specific model, run the following command USD ilab model serve --model-path <model-path> Example command USD ilab model serve --model-path ~/.cache/instructlab/models/granite-7b-code-instruct Example output of when the model is served and ready INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-7b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server. 4.1.1. Optional: Running ilab model serve as a service You can set up a systemd service so that the ilab model serve command runs as a running service. The systemd service runs the ilab model serve command in the background and restarts if it crashes or fails. You can configure the service to start upon system boot. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure. Create a directory for your systemd user service by running the following command: USD mkdir -p USDHOME/.config/systemd/user Create your systemd service file with the following example configurations: USD cat << EOF > USDHOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF 1 Specifies to start by default on boot. Reload the systemd manager configuration by running the following command: USD systemctl --user daemon-reload Start the ilab model serve systemd service by running the following command: USD systemctl --user start ilab-serve.service You can check that the service is running with the following command: USD systemctl --user status ilab-serve.service You can check the service logs by running the following command: USD journalctl --user-unit ilab-serve.service To allow the service to start on boot, run the following command: USD sudo loginctl enable-linger Optional: There are a few optional commands you can run for maintaining your systemd service. You can stop the ilab-serve system service by running the following command: USD systemctl --user stop ilab-serve.service You can prevent the service from starting on boot by removing the "WantedBy=multi-user.target default.target" from the USDHOME/.config/systemd/user/ilab-serve.service file. 4.2. Chatting with the model Once you serve your model, you can now chat with the model. Important The model you are chatting with must match the model you are serving. With the default config.yaml file, the granite-7b-redhat-lab model is the default for serving and chatting. 
Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You downloaded your preferred Granite LLMs. You are serving a model. You have root user access on your machine. Procedure Since you are serving the model in one terminal window, you must open another terminal to chat with the model. To chat with the default model, run the following command: USD ilab model chat To chat with a specific model run the following command: USD ilab model chat --model <model-path> Example command USD ilab model chat --model ~/.cache/instructlab/models/granite-7b-code-instruct Example output of the chatbot USD ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-CODE-INSTRUCT (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default] + Type exit to leave the chatbot. 4.2.1. Optional: Creating an API key for model chatting By default, the ilab CLI does not use authentication. If you want to expose your server to the internet, you can create a API key that connects to your server with the following procedures. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create a API key that is held in USDVLLM_API_KEY parameter by running the following command: USD export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())') You can view the API key by running the following command: USD echo USDVLLM_API_KEY Update the config.yaml by running the following command: USD ilab config edit Add the following parameters to the vllm_args section of your config.yaml file. serve: vllm: vllm_args: - --api-key - <api-key-string> where <api-key-string> Specify your API key string. You can verify that the server is using API key authentication by running the following command: USD ilab model chat Then, seeing the following error that shows an unauthorized user. openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'} Verify that your API key is working by running the following command: USD ilab chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY Example output USD ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default] 4.2.2. 
Optional: Allowing chat access to a model from a secure endpoint You can serve an inference endpoint and allow others to interact with models provided with Red Hat Enterprise Linux AI on secure connections by creating a systemd service and setting up a nginx reverse proxy that exposes a secure endpoint. This allows you to share the secure endpoint with others so they can chat with the model over a network. The following procedure uses self-signed certifications, but it is recommended to use certificates issued by a trusted Certificate Authority (CA). Note The following procedure is supported only on bare metal platforms. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare-metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create a directory for your certificate file and key by running the following command: USD mkdir -p `pwd`/nginx/ssl/ Create an OpenSSL configuration file with the proper configurations by running the following command: USD cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4 1 Specify the distinguished name for your requirements. 2 Specify the alternate name for your requirements. 3 4 Specify the server common name for RHEL AI. In the example, the server name is rhelai.redhat.com . Generate a self signed certificate with a Subject Alternative Name (SAN) enabled with the following commands: USD openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf USD openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout Create the Nginx Configuration file and add it to the `pwd /nginx/conf.d` by running the following command: mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf 1 Specify the name of your server. In the example, the server name is rhelai.redhat.com Run the Nginx container with the new configurations by running the following command: USD podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx If you want to use port 443, you must run the podman run command as a root user.. You can now connect to a serving ilab machine using a secure endpoint URL. 
Example command: USD ilab chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1 Optional: You can also get the server certificate and append it to the Certifi CA Bundle Get the server certificate by running the following command: USD openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt Copy the certificate to your system's trusted CA storage directory and update the CA trust store with the following commands: USD sudo cp server.crt /etc/pki/ca-trust/source/anchors/ USD sudo update-ca-trust You can append your certificate to the Certifi CA bundle by running the following command: USD cat server.crt >> USD(python -m certifi) You can now run ilab model chat with a self-signed certificate. Example command: USD ilab chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1
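The chat procedures above all go through the ilab client. Because ilab model serve runs a vLLM server, the same endpoint can usually also be queried directly over HTTP by any client that speaks the OpenAI-compatible API; the sketch below rests on that assumption, reuses the default local port from the serving procedure, and treats both the API key and the model name as placeholders that must match your own setup:
curl -H "Authorization: Bearer <api_key>" http://127.0.0.1:8000/v1/models
curl -H "Authorization: Bearer <api_key>" -H "Content-Type: application/json" -d '{"model": "granite-7b-redhat-lab", "messages": [{"role": "user", "content": "Write a hello world program in Python."}]}' http://127.0.0.1:8000/v1/chat/completions
The first call lists the models the server exposes; the second sends a single chat turn to the served model.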
[ "ilab model serve", "ilab model serve --model-path <model-path>", "ilab model serve --model-path ~/.cache/instructlab/models/granite-7b-code-instruct", "INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-7b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.", "mkdir -p USDHOME/.config/systemd/user", "cat << EOF > USDHOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF", "systemctl --user daemon-reload", "systemctl --user start ilab-serve.service", "systemctl --user status ilab-serve.service", "journalctl --user-unit ilab-serve.service", "sudo loginctl enable-linger", "systemctl --user stop ilab-serve.service", "ilab model chat", "ilab model chat --model <model-path>", "ilab model chat --model ~/.cache/instructlab/models/granite-7b-code-instruct", "ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-CODE-INSTRUCT (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]", "export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())')", "echo USDVLLM_API_KEY", "ilab config edit", "serve: vllm: vllm_args: - --api-key - <api-key-string>", "ilab model chat", "openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'}", "ilab chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY", "ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]", "mkdir -p `pwd`/nginx/ssl/", "cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4", "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf", "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout", "mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate 
/etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf", "podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx", "ilab chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1", "openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt", "sudo cp server.crt /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust", "cat server.crt >> USD(python -m certifi)", "ilab chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/building_your_rhel_ai_environment/serving_and_chatting
Integrating the overcloud with an existing Red Hat Ceph Storage Cluster
Integrating the overcloud with an existing Red Hat Ceph Storage Cluster Red Hat OpenStack Platform 17.1 Configuring the overcloud to use a standalone Red Hat Ceph Storage cluster OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_the_overcloud_with_an_existing_red_hat_ceph_storage_cluster/index
25.3. Getting Help for Vault Commands
25.3. Getting Help for Vault Commands To display all commands used to manage vaults and vault containers: To display detailed help for a particular command, add the --help option to the command: Vault Commands Fail with vault not found Error Some commands require you to specify the owner or the type of the vault using the following options: --user or --service specify the owner of the vault you want to view --shared specify that the vault you want to view is a shared vault For example, if you attempt to view another user's vault without adding --user , IdM informs you it did not find the vault:
[ "ipa help vault", "ipa vault-add --help", "ipa vault-show user_vault --user user", "[admin@server ~]USD ipa vault-show user_vault ipa: ERROR: user_vault: vault not found" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault-manage
Chapter 87. Docker Component
Chapter 87. Docker Component Available as of Camel version 2.15 Camel component for communicating with Docker. The Docker Camel component leverages the docker-java via the Docker Remote API . 87.1. URI format docker://[operation]?[options] Where operation is the specific action to perform on Docker. 87.2. General Options The Docker component supports 2 options, which are listed below. Name Description Default Type configuration (advanced) To use the shared docker configuration DockerConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Docker endpoint is configured using URI syntax: with the following path and query parameters: 87.2.1. Path Parameters (1 parameters): Name Description Default Type operation Required Which operation to use DockerOperation 87.2.2. Query Parameters (20 parameters): Name Description Default Type email (common) Email address associated with the user String host (common) Required Docker host localhost String port (common) Required Docker port 2375 Integer requestTimeout (common) Request timeout for response (in seconds) Integer bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern cmdExecFactory (advanced) The fully qualified class name of the DockerCmdExecFactory implementation to use com.github.dockerjava.netty.NettyDockerCmdExecFactory String followRedirectFilter (advanced) Whether to follow redirect filter false boolean loggingFilter (advanced) Whether to use logging filter false boolean maxPerRouteConnections (advanced) Maximum route connections 100 Integer maxTotalConnections (advanced) Maximum total connections 100 Integer serverAddress (advanced) Server address for docker registry. https://index.docker.io/v1/ String socket (advanced) Socket connection mode true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean certPath (security) Location containing the SSL certificate chain String password (security) Password to authenticate with String secure (security) Use HTTPS communication false boolean tlsVerify (security) Check TLS false boolean username (security) User name to authenticate with String 87.3. Spring Boot Auto-Configuration The component supports 20 options, which are listed below. 
Name Description Default Type camel.component.docker.configuration.cert-path Location containing the SSL certificate chain String camel.component.docker.configuration.cmd-exec-factory The fully qualified class name of the DockerCmdExecFactory implementation to use com.github.dockerjava.netty.NettyDockerCmdExecFactory String camel.component.docker.configuration.email Email address associated with the user String camel.component.docker.configuration.follow-redirect-filter Whether to follow redirect filter false Boolean camel.component.docker.configuration.host Docker host localhost String camel.component.docker.configuration.logging-filter Whether to use logging filter false Boolean camel.component.docker.configuration.max-per-route-connections Maximum route connections 100 Integer camel.component.docker.configuration.max-total-connections Maximum total connections 100 Integer camel.component.docker.configuration.operation Which operation to use DockerOperation camel.component.docker.configuration.parameters Additional configuration parameters as key/value pairs Map camel.component.docker.configuration.password Password to authenticate with String camel.component.docker.configuration.port Docker port 2375 Integer camel.component.docker.configuration.request-timeout Request timeout for response (in seconds) Integer camel.component.docker.configuration.secure Use HTTPS communication false Boolean camel.component.docker.configuration.server-address Server address for docker registry. https://index.docker.io/v1/ String camel.component.docker.configuration.socket Socket connection mode true Boolean camel.component.docker.configuration.tls-verify Check TLS false Boolean camel.component.docker.configuration.username User name to authenticate with String camel.component.docker.enabled Enable docker component true Boolean camel.component.docker.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 87.4. Header Strategy All URI option can be passed as Header properties. Values found in a message header take precedence over URI parameters. A header property takes the form of a URI option prefixed with CamelDocker as shown below URI Option Header Property containerId CamelDockerContainerId 87.5. Examples The following example consumes events from Docker: from("docker://events?host=192.168.59.103&port=2375").to("log:event"); The following example queries Docker for system wide information from("docker://info?host=192.168.59.103&port=2375").to("log:info"); 87.6. Dependencies To use Docker in your Camel routes you need to add a dependency on camel-docker , which implements the component. If you use Maven you can just add the following to your pom.xml, substituting the version number for the latest and greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-docker</artifactId> <version>x.x.x</version> </dependency>
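For reference, the endpoints above can also be wired up inside a RouteBuilder class. The following is a minimal sketch rather than an example taken from this guide: it assumes a Docker daemon reachable at 192.168.59.103:2375, as in the routes above, and the timer period and requestTimeout values are illustrative only.

import org.apache.camel.builder.RouteBuilder;

public class DockerInfoRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Poll system-wide Docker information every 30 seconds and write it to the log.
        // The endpoint options (host, port, requestTimeout) come from the tables above.
        from("timer:dockerInfo?period=30000")
            .to("docker://info?host=192.168.59.103&port=2375&requestTimeout=10")
            .to("log:info");
    }
}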
[ "docker://[operation]?[options]", "docker:operation", "from(\"docker://events?host=192.168.59.103&port=2375\").to(\"log:event\");", "from(\"docker://info?host=192.168.59.103&port=2375\").to(\"log:info\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-docker</artifactId> <version>x.x.x</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/docker-component
23.6. Storage Devices
23.6. Storage Devices You can install Red Hat Enterprise Linux on a large variety of storage devices. For System z, select Specialized Storage Devices Figure 23.4. Storage devices Basic Storage Devices This option does not apply to System z. Specialized Storage Devices Select Specialized Storage Devices to install Red Hat Enterprise Linux on the following storage devices: Direct access storage devices (DASDs) Multipath devices such as FCP-attachable SCSI LUN with multiple paths Storage area networks (SANs) such as FCP-attachable SCSI LUNs with a single path Use the Specialized Storage Devices option to configure Internet Small Computer System Interface (iSCSI) connections. You cannot use the FCoE (Fiber Channel over Ethernet) option on System z; this option is grayed out. Note Monitoring of LVM and software RAID devices by the mdeventd daemon is not performed during installation. 23.6.1. The Storage Devices Selection Screen The storage devices selection screen displays all storage devices to which anaconda has access. Devices are grouped under the following tabs: Basic Devices Basic storage devices directly connected to the local system, such as hard disk drives and solid-state drives. On System z, this contains activated DASDs. Firmware RAID Storage devices attached to a firmware RAID controller. This does not apply to System z. Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. Important The installer only detects multipath storage devices with serial numbers that are 16 or 32 characters in length. Other SAN Devices Any other devices available on a storage area network (SAN) such as FCP LUNs attached over one single path. Figure 23.5. Select storage devices - Basic Devices Figure 23.6. Select storage devices - Multipath Devices Figure 23.7. Select storage devices - Other SAN Devices The storage devices selection screen also contains a Search tab that allows you to filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN) at which they are accessed. Figure 23.8. The Storage Devices Search Tab The tab contains a drop-down menu to select searching by port, target, WWID, or LUN (with corresponding text boxes for these values). Searching by WWID or LUN requires additional values in the corresponding text box. Each tab presents a list of devices detected by anaconda , with information about the device to help you to identify it. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. For example, the menu on the Multipath Devices tab allows you to specify any of WWID , Capacity , Vendor , Interconnect , and Paths to include among the details presented for each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Figure 23.9. Selecting Columns Each device is presented on a separate row, with a checkbox to its left. Click the checkbox to make a device available during the installation process, or click the radio button at the left of the column headings to select or deselect all the devices listed in a particular screen. Later in the installation process, you can choose to install Red Hat Enterprise Linux onto any of the devices selected here, and can choose to automatically mount any of the other devices selected here as part of the installed system. 
Note that the devices that you select here are not automatically erased by the installation process. Selecting a device on this screen does not, in itself, place data stored on the device at risk. Note also that any devices that you do not select here to form part of the installed system can be added to the system after installation by modifying the /etc/fstab file. When you have selected the storage devices to make available during installation, click Next and proceed to Section 23.7, "Setting the Hostname" . 23.6.1.1. DASD low-level formatting Any DASDs used must be low-level formatted. The installer detects this and lists the DASDs that need formatting. If any of the DASDs specified interactively in linuxrc or in a parameter or configuration file are not yet low-level formatted, the following confirmation dialog appears: Figure 23.10. Unformatted DASD Devices Found To automatically allow low-level formatting of unformatted online DASDs, specify the kickstart command zerombr . Refer to Chapter 32, Kickstart Installations for more details. 23.6.1.2. Advanced Storage Options From this screen you can configure an iSCSI (SCSI over TCP/IP) target or FCP LUNs. Refer to Appendix B, iSCSI Disks for an introduction to iSCSI. Figure 23.11. Advanced Storage Options 23.6.1.2.1. Configure iSCSI parameters To use iSCSI storage devices for the installation, anaconda must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a username and password for CHAP (Challenge Handshake Authentication Protocol) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached ( reverse CHAP ), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP . Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the username and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps as many times as necessary to add all required iSCSI storage. However, you cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. Procedure 23.1. iSCSI discovery Use the iSCSI Discovery Details dialog to provide anaconda with the information that it needs to discover the iSCSI target. Figure 23.12. The iSCSI Discovery Details dialog Enter the IP address of the iSCSI target in the Target IP Address field. Provide a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN contains: the string iqn. (note the period) a date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage a colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example, :diskarrays-sn-a8675309 .
A complete IQN therefore resembles: iqn.2010-09.com.example.storage:diskarrays-sn-a8675309 , and anaconda pre-populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information on IQNs, refer to 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from http://tools.ietf.org/html/rfc3720#section-3.2.6 and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from http://tools.ietf.org/html/rfc3721#section-1 . Use the drop-down menu to specify the type of authentication to use for iSCSI discovery: Figure 23.13. iSCSI discovery authentication no credentials CHAP pair CHAP pair and a reverse pair If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 23.14. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 23.15. CHAP pair and a reverse pair Click Start Discovery . Anaconda attempts to discover an iSCSI target based on the information that you provided. If discovery succeeds, the iSCSI Discovered Nodes dialog presents you with a list of all the iSCSI nodes discovered on the target. Each node is presented with a checkbox beside it. Click the checkboxes to select the nodes to use for installation. Figure 23.16. The iSCSI Discovered Nodes dialog Click Login to initiate an iSCSI session. Procedure 23.2. Starting an iSCSI session Use the iSCSI Nodes Login dialog to provide anaconda with the information that it needs to log into the nodes on the iSCSI target and start an iSCSI session. Figure 23.17. The iSCSI Nodes Login dialog Use the drop-down menu to specify the type of authentication to use for the iSCSI session: Figure 23.18. iSCSI session authentication no credentials CHAP pair CHAP pair and a reverse pair Use the credentials from the discovery step If your environment uses the same type of authentication and same username and password for iSCSI discovery and for the iSCSI session, select Use the credentials from the discovery step to reuse these credentials. If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 23.19. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 23.20. CHAP pair and a reverse pair Click Login . Anaconda attempts to log into the nodes on the iSCSI target based on the information that you provided. The iSCSI Login Results dialog presents you with the results. Figure 23.21. The iSCSI Login Results dialog Click OK to continue. 23.6.1.2.2. FCP Devices FCP devices enable IBM System z to use SCSI devices rather than, or in addition to, DASD devices. FCP devices provide a switched fabric topology that enables System z systems to use SCSI LUNs as disk devices in addition to traditional DASD devices.
IBM System z requires that any FCP device be entered manually (either in the installation program interactively, or specified as unique parameter entries in the parameter or CMS configuration file) for the installation program to activate FCP LUNs. The values entered here are unique to each site in which they are set up. Notes Interactive creation of an FCP device is only possible in graphical mode. It is not possible to interactively configure an FCP device in a text-only install. Each value entered should be verified as correct, as any mistakes made may cause the system not to operate properly. Use only lower-case letters in hex values. For more information on these values, refer to the hardware documentation and check with the system administrator who set up the network for this system. To configure a Fiber Channel Protocol SCSI device, select Add ZFCP LUN and click Add Drive . In the Add FCP device dialog, fill in the details for the 16-bit device number, 64-bit World Wide Port Number (WWPN) and 64-bit FCP LUN. Click the Add button to connect to the FCP device using this information. Figure 23.22. Add FCP Device The newly added device should then be present and usable in the storage device selection screen on the Multipath Devices tab, if you have activated more than one path to the same LUN, or on Other SAN Devices , if you have activated only one path to the LUN. Important The installer requires the definition of a DASD. For a SCSI-only installation, enter none as the parameter interactively during phase 1 of an interactive installation, or add DASD=none in the parameter or CMS configuration file. This satisfies the requirement for a defined DASD parameter, while resulting in a SCSI-only environment.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Storage_Devices-s390
Chapter 2. Ceph network configuration
Chapter 2. Ceph network configuration As a storage administrator, you must understand the network environment that the Red Hat Ceph Storage cluster will operate in, and configure the Red Hat Ceph Storage accordingly. Understanding and configuring the Ceph network options will ensure optimal performance and reliability of the overall storage cluster. 2.1. Prerequisites Network connectivity. Installation of the Red Hat Ceph Storage software. 2.2. Network configuration for Ceph Network configuration is critical for building a high performance Red Hat Ceph Storage cluster. The Ceph storage cluster does not perform request routing or dispatching on behalf of the Ceph client. Instead, Ceph clients make requests directly to Ceph OSD daemons. Ceph OSDs perform data replication on behalf of Ceph clients, which means replication and other factors impose additional loads on the networks of Ceph storage clusters. All Ceph clusters must use a public network. However, unless you specify an internal cluster network, Ceph assumes a single public network. Ceph can function with a public network only, but for large storage clusters you will see significant performance improvement with a second private network for carrying only cluster-related traffic. Important Red Hat recommends running a Ceph storage cluster with two networks. One public network and one private network. To support two networks, each Ceph Node will need to have more than one network interface card (NIC). There are several reasons to consider operating two separate networks: Performance: Ceph OSDs handle data replication for the Ceph clients. When Ceph OSDs replicate data more than once, the network load between Ceph OSDs easily dwarfs the network load between Ceph clients and the Ceph storage cluster. This can introduce latency and create a performance problem. Recovery and rebalancing can also introduce significant latency on the public network. Security : While most people are generally civil, some actors will engage in what is known as a Denial of Service (DoS) attack. When traffic between Ceph OSDs gets disrupted, peering may fail and placement groups may no longer reflect an active + clean state, which may prevent users from reading and writing data. A great way to defeat this type of attack is to maintain a completely separate cluster network that does not connect directly to the internet. Network configuration settings are not required. Ceph can function with a public network only, assuming a public network is configured on all hosts running a Ceph daemon. However, Ceph allows you to establish much more specific criteria, including multiple IP networks and subnet masks for your public network. You can also establish a separate cluster network to handle OSD heartbeat, object replication, and recovery traffic. Do not confuse the IP addresses you set in the configuration with the public-facing IP addresses network clients might use to access your service. Typical internal IP networks are often 192.168.0.0 or 10.0.0.0 . Note Ceph uses CIDR notation for subnets, for example, 10.0.0.0/24 . Important If you specify more than one IP address and subnet mask for either the public or the private network, the subnets within the network must be capable of routing to each other. Additionally, make sure you include each IP address and subnet in your IP tables and open ports for them as necessary. When you configured the networks, you can restart the cluster or restart each daemon. 
Ceph daemons bind dynamically, so you do not have to restart the entire cluster at once if you change the network configuration. 2.3. Configuration requirements for Ceph daemons Ceph has one network configuration requirement that applies to all daemons. The Ceph configuration file must specify the host for each daemon. Important Some deployment utilities might create a configuration file for you. Do not set these values if the deployment utility does it for you. Important The host option is the short name of the node, not its FQDN. It is not an IP address. You can set the host names and the IP addresses for where the daemon resides by specifying the host name. Example You do not have to set the node IP address for a daemon, it is optional. If you have a static IP configuration and both public and private networks running, the Ceph configuration file might specify the IP address of the node for each daemon. Setting a static IP address for a daemon must appear in the daemon instance sections of the Ceph configuration file. Example You can deploy an OSD host with a single NIC in a cluster with two networks by forcing the OSD host. You can force the OSD host to operate on the public network by adding a public addr entry to the [osd.n] section of the Ceph configuration file, where n refers to the number of the OSD with one NIC. Additionally, the public network and cluster network must be able to route traffic to each other, which Red Hat does not recommend for security reasons. Important Red Hat does not recommend deploying an OSD node with a single NIC with two networks for security reasons. Additional Resources See the host options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. See the common options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. 2.4. Ceph network messenger Messenger is the Ceph network layer implementation. Red Hat supports two messenger types: simple async In Red Hat Ceph Storage 3 and higher, async is the default messenger type. To change the messenger type, specify the ms_type configuration setting in the [global] section of the Ceph configuration file. Note For the async messenger, Red Hat supports the posix transport type, but does not currently support rdma or dpdk . By default, the ms_type setting in Red Hat Ceph Storage 3 or higher reflects async+posix , where async is the messenger type and posix is the transport type. SimpleMessenger The SimpleMessenger implementation uses TCP sockets with two threads per socket. Ceph associates each logical session with a connection. A pipe handles the connection, including the input and output of each message. While SimpleMessenger is effective for the posix transport type, it is not effective for other transport types such as rdma or dpdk . AsyncMessenger Consequently, AsyncMessenger is the default messenger type for Red Hat Ceph Storage 3 or higher. For Red Hat Ceph Storage 3 or higher, the AsyncMessenger implementation uses TCP sockets with a fixed-size thread pool for connections, which should be equal to the highest number of replicas or erasure-code chunks. The thread count can be set to a lower value if performance degrades due to a low CPU count or a high number of OSDs per server. Note Red Hat does not support other transport types such as rdma or dpdk at this time. 
Additional Resources See the AsyncMessenger options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. See the Red Hat Ceph Storage Architecture Guide for details about using on-wire encryption with the Ceph messenger version 2 protocol. 2.5. Configuring a public network The public network configuration allows you to specifically define IP addresses and subnets for the public network. You may specifically assign static IP addresses or override public network settings using the public addr setting for a specific daemon. Prerequisites Installation of the Red Hat Ceph Storage software. Procedure Add the following option to the [global] section of the Ceph configuration file: Additional Resources See the common options in Red Hat Ceph Storage Configuration Guide , Appendix B for specific option descriptions and usage. 2.6. Configuring a private network If you declare a cluster network, OSDs will route heartbeat, object replication, and recovery traffic over the cluster network. This can improve performance compared to using a single network. Important It is preferable that the cluster network is not reachable from the public network or the Internet for added security. The cluster network configuration allows you to declare a cluster network, and specifically define IP addresses and subnets for the cluster network. You can specifically assign static IP addresses or override cluster network settings using the cluster addr setting for specific OSD daemons. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Procedure Add the following option to the [global] section of the Ceph configuration file: 2.7. Verify the firewall settings By default, daemons bind to ports within the 6800:7100 range. You can configure this range at your discretion. Before configuring the firewall, check the default firewall configuration. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the Ceph Monitor node. Procedure You can configure this range at your discretion: For the firewalld daemon, execute the following command: Some Linux distributions include rules that reject all inbound requests except SSH from all network interfaces. Example 2.8. Firewall settings for Ceph Monitor node Ceph monitors listen on ports 3300 and 6789 by default. Additionally, Ceph monitors always operate on the public network. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the Ceph Monitor node. Procedure Add rules using the following example: Replace IFACE with the public network interface. For example, eth0 , eth1 , and so on. Replace IP-ADDRESS with the IP address of the public network and NETMASK with the netmask for the public network. For the firewalld daemon, execute the following commands: 2.9. Firewall settings for Ceph OSDs By default, Ceph OSDs bind to the first available ports on a Ceph node beginning at port 6800. Ensure that you open at least four ports beginning at port 6800 for each OSD that runs on the node: One for talking to clients and monitors on the public network. One for sending data to other OSDs on the cluster network. Two for sending heartbeat packets on the cluster network. Ports are node-specific. However, you might need to open more ports than the number of ports needed by Ceph daemons running on that Ceph node in the event that processes get restarted and the bound ports do not get released.
Consider opening a few additional ports in case a daemon fails and restarts without releasing its original port, so that the restarted daemon can bind to a new port. Also, consider opening the port range of 6800:7300 on each OSD node. If you set separate public and cluster networks, you must add rules for both the public network and the cluster network, because clients will connect using the public network and other Ceph OSD Daemons will connect using the cluster network. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the Ceph OSD nodes. Procedure Add rules using the following example: Replace IFACE with the public network interface (for example, eth0 , eth1 , and so on). Replace IP-ADDRESS with the IP address of the public network and NETMASK with the netmask for the public network. For the firewalld daemon, execute the following: If you put the cluster network into another zone, open the ports within that zone as appropriate. 2.10. Verifying and configuring the MTU value The maximum transmission unit (MTU) value is the size, in bytes, of the largest packet sent on the link layer. The default MTU value is 1500 bytes. Red Hat recommends using jumbo frames, an MTU value of 9000 bytes, for a Red Hat Ceph Storage cluster. Important Red Hat Ceph Storage requires the same MTU value throughout all networking devices in the communication path, end-to-end for both public and cluster networks. Verify that the MTU value is the same on all nodes and networking equipment in the environment before using a Red Hat Ceph Storage cluster in production. Note When bonding network interfaces together, the MTU value only needs to be set on the bonded interface. The new MTU value propagates from the bonding device to the underlying network devices. Prerequisites Root-level access to the node. Procedure Verify the current MTU value: Example For this example, the network interface is enp22s0f0 and it has an MTU value of 1500 . To temporarily change the MTU value online: Syntax Example To permanently change the MTU value, open the network configuration file for that particular network interface for editing: Syntax Example On a new line, add the MTU=9000 option: Example Restart the network service: Example Additional Resources For more details, see the Configuring and Managing Networking guide for Red Hat Enterprise Linux 8. For more details, see the Networking Guide for Red Hat Enterprise Linux 7. 2.11. Additional Resources See the Red Hat Ceph Storage network configuration options in Appendix B for specific option descriptions and usage. See the Red Hat Ceph Storage Architecture Guide for details about using on-wire encryption with the Ceph messenger version 2 protocol.
[ "[mon.a] host = host01 mon_addr = 10.0.0.101:6789, 10.0.0.101:3300 [osd.0] host = host02", "[osd.0] public_addr = 10.74.250.101/21 cluster_addr = 10.74.250.101/21", "[global] public_network = PUBLIC-NET / NETMASK", "[global] cluster_network = CLUSTER-NET / NETMASK", "sudo iptables -L", "firewall-cmd --list-all-zones", "REJECT all -- anywhere anywhere reject-with icmp-host-prohibited", "sudo iptables -A INPUT -i IFACE -p tcp -s IP-ADDRESS / NETMASK --dport 6789 -j ACCEPT sudo iptables -A INPUT -i IFACE -p tcp -s IP-ADDRESS / NETMASK --dport 3300 -j ACCEPT", "firewall-cmd --zone=public --add-port=6789/tcp firewall-cmd --zone=public --add-port=6789/tcp --permanent firewall-cmd --zone=public --add-port=3300/tcp firewall-cmd --zone=public --add-port=3300/tcp --permanent", "sudo iptables -A INPUT -i IFACE -m multiport -p tcp -s IP-ADDRESS / NETMASK --dports 6800:6810 -j ACCEPT", "firewall-cmd --zone=public --add-port=6800-6810/tcp firewall-cmd --zone=public --add-port=6800-6810/tcp --permanent", "ip link list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp22s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 40:f2:e9:b8:a0:48 brd ff:ff:ff:ff:ff:ff", "ip link set dev NET_INTERFACE mtu NEW_MTU_VALUE", "ip link set dev enp22s0f0 mtu 9000", "vim /etc/sysconfig/network-scripts/ifcfg- NET_INTERFACE", "vim /etc/sysconfig/network-scripts/ifcfg-enp22s0f0", "NAME=\"enp22s0f0\" DEVICE=\"enp22s0f0\" MTU=9000 1 ONBOOT=yes NETBOOT=yes UUID=\"a8c1f1e5-bd62-48ef-9f29-416a102581b2\" IPV6INIT=yes BOOTPROTO=dhcp TYPE=Ethernet", "systemctl restart network" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-network-configuration
Chapter 6. Generic ephemeral volumes
Chapter 6. Generic ephemeral volumes 6.1. Overview Generic ephemeral volumes are a type of ephemeral volume that can be provided by all storage drivers that support persistent volumes and dynamic provisioning. Generic ephemeral volumes are similar to emptyDir volumes in that they provide a per-pod directory for scratch data, which is usually empty after provisioning. Generic ephemeral volumes are specified inline in the pod spec and follow the pod's lifecycle. They are created and deleted along with the pod. Generic ephemeral volumes have the following features: Storage can be local or network-attached. Volumes can have a fixed size that pods are not able to exceed. Volumes might have some initial data, depending on the driver and parameters. Typical operations on volumes are supported, assuming that the driver supports them, including snapshotting, cloning, resizing, and storage capacity tracking. Note Generic ephemeral volumes do not support offline snapshots and resize. 6.2. Lifecycle and persistent volume claims The parameters for a volume claim are allowed inside a volume source of a pod. Labels, annotations, and the whole set of fields for persistent volume claims (PVCs) are supported. When such a pod is created, the ephemeral volume controller then creates an actual PVC object (from the template shown in the Creating generic ephemeral volumes procedure) in the same namespace as the pod, and ensures that the PVC is deleted when the pod is deleted. This triggers volume binding and provisioning in one of two ways: Either immediately, if the storage class uses immediate volume binding. With immediate binding, the scheduler is forced to select a node that has access to the volume after it is available. When the pod is tentatively scheduled onto a node ( WaitForFirstConsumer volume binding mode). This volume binding option is recommended for generic ephemeral volumes because then the scheduler can choose a suitable node for the pod. In terms of resource ownership, a pod that has generic ephemeral storage is the owner of the PVCs that provide that ephemeral storage. When the pod is deleted, the Kubernetes garbage collector deletes the PVC, which then usually triggers deletion of the volume because the default reclaim policy of storage classes is to delete volumes. You can create quasi-ephemeral local storage by using a storage class with a reclaim policy of retain: the storage outlives the pod, and in this case, you must ensure that volume clean-up happens separately. While these PVCs exist, they can be used like any other PVC. In particular, they can be referenced as a data source in volume cloning or snapshotting. The PVC object also holds the current status of the volume. Additional resources Creating generic ephemeral volumes 6.3. Security You can enable the generic ephemeral volume feature to allow users who can create pods to also create persistent volume claims (PVCs) indirectly. This feature works even if these users do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit your security model, use an admission webhook that rejects objects such as pods that have a generic ephemeral volume. The normal namespace quota for PVCs still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies. 6.4. Persistent volume claim naming Automatically created persistent volume claims (PVCs) are named by a combination of the pod name and the volume name, with a hyphen (-) in the middle.
This naming convention also introduces a potential conflict between different pods, and between pods and manually created PVCs. For example, pod-a with volume scratch and pod with volume a-scratch both end up with the same PVC name, pod-a-scratch . Such conflicts are detected, and a PVC is only used for an ephemeral volume if it was created for the pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified, but this does not resolve the conflict. Without the right PVC, a pod cannot start. Important Be careful when naming pods and volumes inside the same namespace so that naming conflicts do not occur. 6.5. Creating generic ephemeral volumes Procedure Create the pod object definition and save it to a file. Include the generic ephemeral volume information in the file. my-example-pod-with-generic-vols.yaml kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: "/mnt/storage" name: data command: [ "sleep", "1000000" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "gp2-csi" resources: requests: storage: 1Gi 1 Generic ephemeral volume claim.
[ "kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/storage/generic-ephemeral-volumes
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component in internal mode, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Note Deploying standalone Multicloud Object Gateway component is not supported in external mode deployments. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. 
Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploy-standalone-multicloud-object-gateway
Chapter 6. Mirroring data for hybrid and Multicloud buckets
Chapter 6. Mirroring data for hybrid and Multicloud buckets The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters. Prerequisites You must first add a backing storage that can be used by the MCG; see Chapter 4, Adding storage resources for hybrid or Multicloud . Then you create a bucket class that reflects the data management policy, in this case mirroring. Procedure You can set up data mirroring in three ways: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" Section 6.3, "Configuring buckets to mirror data using the user interface" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . 6.3. Configuring buckets to mirror data using the user interface In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. On the NooBaa page, click the buckets icon on the left side. You can see a list of your buckets: Click the bucket you want to update. Click Edit Tier 1 Resources : Select Mirror and check the relevant resources you want to use for this bucket. In the following example, the data is mirrored between noobaa-default-backing-store , which is on RGW, and AWS-backingstore , which is on AWS: Click Save . Note Resources created in the NooBaa UI cannot be used by the OpenShift UI or the Multicloud Object Gateway (MCG) CLI.
[ "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/mirroring-data-for-hybrid-and-Multicloud-buckets
Updating
Updating Red Hat Enterprise Linux AI 1.4 Upgrading your RHEL AI system and models Red Hat RHEL AI Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/updating/index
Chapter 61. JAX-RS 2.0 Filters and Interceptors
Chapter 61. JAX-RS 2.0 Filters and Interceptors Abstract JAX-RS 2.0 defines standard APIs and semantics for installing filters and interceptors in the processing pipeline for REST invocations. Filters and interceptors are typically used to provide such capabilities as logging, authentication, authorization, message compression, message encryption, and so on. 61.1. Introduction to JAX-RS Filters and Interceptors Overview This section provides an overview of the processing pipeline for JAX-RS filters and interceptors, highlighting the extension points where it is possible to install a filter chain or an interceptor chain. Filters A JAX-RS 2.0 filter is a type of plug-in that gives a developer access to all of the JAX-RS messages passing through a CXF client or server. A filter is suitable for processing the metadata associated with a message: HTTP headers, query parameters, media type, and other metadata. Filters have the capability to abort a message invocation (useful for security plug-ins, for example). If you like, you can install multiple filters at each extension point, in which case the filters are executed in a chain (the order of execution is undefined, however, unless you specify a priority value for each installed filter). Interceptors A JAX-RS 2.0 interceptor is a type of plug-in that gives a developer access to a message body as it is being read or written. Interceptors are wrapped around either the MessageBodyReader.readFrom method invocation (for reader interceptors) or the MessageBodyWriter.writeTo method invocation (for writer interceptors). If you like, you can install multiple interceptors at each extension point, in which case the interceptors are executed in a chain (the order of execution is undefined, however, unless you specify a priority value for each installed interceptor). Server processing pipeline Figure 61.1, "Server-Side Filter and Interceptor Extension Points" shows an outline of the processing pipeline for JAX-RS filters and interceptors installed on the server side. Figure 61.1. Server-Side Filter and Interceptor Extension Points Server extension points In the server processing pipeline, you can add a filter (or interceptor) at any of the following extension points: PreMatchContainerRequest filter ContainerRequest filter ReadInterceptor ContainerResponse filter WriteInterceptor Note that the PreMatchContainerRequest extension point is reached before resource matching has occurred, so some of the context metadata will not be available at this point. Client processing pipeline Figure 61.2, "Client-Side Filter and Interceptor Extension Points" shows an outline of the processing pipeline for JAX-RS filters and interceptors installed on the client side. Figure 61.2. Client-Side Filter and Interceptor Extension Points Client extension points In the client processing pipeline, you can add a filter (or interceptor) at any of the following extension points: ClientRequest filter WriteInterceptor ClientResponse filter ReadInterceptor Filter and interceptor order If you install multiple filters or interceptors at the same extension point, the execution order of the filters depends on the priority assigned to them (using the @Priority annotation in the Java source). A priority is represented as an integer value. In general, a filter with a higher priority number is placed closer to the resource method invocation on the server side; while a filter with a lower priority number is placed closer to the client invocation. 
In other words, the filters and interceptors acting on a request message are executed in ascending order of priority number; while the filters and interceptors acting on a response message are executed in descending order of priority number. Filter classes The following Java interfaces can be implemented in order to create custom REST message filters: javax.ws.rs.container.ContainerRequestFilter javax.ws.rs.container.ContainerResponseFilter javax.ws.rs.client.ClientRequestFilter javax.ws.rs.client.ClientResponseFilter Interceptor classes The following Java interfaces can be implemented in order to create custom REST message interceptors: javax.ws.rs.ext.ReaderInterceptor javax.ws.rs.ext.WriterInterceptor 61.2. Container Request Filter Overview This section explains how to implement and register a container request filter , which is used to intercept an incoming request message on the server (container) side. Container request filters are often used to process headers on the server side and can be used for any kind of generic request processing (that is, processing that is independent of the particular resource method called). Moreover, the container request filter is something of a special case, because it can be installed at two distinct extension points: PreMatchContainerRequest (before the resource matching step); and ContainerRequest (after the resource matching step). ContainerRequestFilter interface The javax.ws.rs.container.ContainerRequestFilter interface is defined as follows: By implementing the ContainerRequestFilter interface, you can create a filter for either of the following extension points on the server side: PreMatchContainerRequest ContainerRequest ContainerRequestContext interface The filter method of ContainerRequestFilter receives a single argument of type javax.ws.rs.container.ContainerRequestContext , which can be used to access the incoming request message and its related metadata. The ContainerRequestContext interface is defined as follows: Sample implementation for PreMatchContainerRequest filter To implement a container request filter for the PreMatchContainerRequest extension point (that is, where the filter is executed prior to resource matching), define a class that implements the ContainerRequestFilter interface, making sure to annotate the class with the @PreMatching annotation (to select the PreMatchContainerRequest extension point). For example, the following code shows an example of a simple container request filter that gets installed in the PreMatchContainerRequest extension point, with a priority of 20: Sample implementation for ContainerRequest filter To implement a container request filter for the ContainerRequest extension point (that is, where the filter is executed after resource matching), define a class that implements the ContainerRequestFilter interface, without the @PreMatching annotation. For example, the following code shows an example of a simple container request filter that gets installed in the ContainerRequest extension point, with a priority of 30: Injecting ResourceInfo At the ContainerRequest extension point (that is, after resource matching has occurred), it is possible to access the matched resource class and resource method by injecting the ResourceInfo class. For example, the following code shows how to inject the ResourceInfo class as a field of the ContainerRequestFilter class: Aborting the invocation It is possible to abort a server-side invocation by creating a suitable implementation of a container request filter. 
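As a rough sketch of the abort pattern (this is not the guide's own sample), the filter below reads a username and password from the request's query parameters and aborts the invocation with a 401 response when they cannot be validated. The authenticate method is a placeholder for a real credential check, and the parameter names and priority value are illustrative only.

import javax.annotation.Priority;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@PreMatching
@Priority(20)
@Provider
public class AuthorizationRequestFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Read the credentials from the request URI's query parameters.
        MultivaluedMap<String, String> params =
                requestContext.getUriInfo().getQueryParameters();
        String username = params.getFirst("user");
        String password = params.getFirst("password");

        if (!authenticate(username, password)) {
            // Abort the invocation and return an error response to the client.
            requestContext.abortWith(
                    Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }

    private boolean authenticate(String username, String password) {
        // Placeholder check; substitute a lookup against your real credential store.
        return username != null && password != null;
    }
}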
Typically, this is useful for implementing security features on the server side: for example, to implement an authentication feature or an authorization feature. If an incoming request fails to authenticate successfully, you could abort the invocation from within the container request filter. For example, the following pre-matching feature attempts to extract a username and password from the URI's query parameters and calls an authenticate method to check the username and password credentials. If the authentication fails, the invocation is aborted by calling abortWith on the ContainerRequestContext object, passing the error response that is to be returned to the client. Binding the server request filter To bind a server request filter (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the container request filter class, as shown in the following code fragment: When the container request filter implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the server request filter to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the filter. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.3. Container Response Filter Overview This section explains how to implement and register a container response filter , which is used to intercept an outgoing response message on the server side. Container response filters can be used to populate headers automatically in a response message and, in general, can be used for any kind of generic response processing. ContainerResponseFilter interface The javax.ws.rs.container.ContainerResponseFilter interface is defined as follows: By implementing the ContainerResponseFilter , you can create a filter for the ContainerResponse extension point on the server side, which filters the response message after the invocation has executed. Note The container response filter gives you access both to the request message (through the requestContext argument) and the response message (through the responseContext message), but only the response can be modified at this stage. ContainerResponseContext interface The filter method of ContainerResponseFilter receives two arguments: an argument of type javax.ws.rs.container.ContainerRequestContext (see the section called "ContainerRequestContext interface" ); and an argument of type javax.ws.rs.container.ContainerResponseContext , which can be used to access the outgoing response message and its related metadata. The ContainerResponseContext interface is defined as follows: Sample implementation To implement a container response filter for the ContainerResponse extension point (that is, where the filter is executed after the invocation has been executed on the server side), define a class that implements the ContainerResponseFilter interface. 
For example, the following code shows an example of a simple container response filter that gets installed in the ContainerResponse extension point, with a priority of 10: Binding the server response filter To bind a server response filter (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the container response filter class, as shown in the following code fragment: When the container response filter implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the server response filter to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the filter. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.4. Client Request Filter Overview This section explains how to implement and register a client request filter , which is used to intercept an outgoing request message on the client side. Client request filters are often used to process headers and can be used for any kind of generic request processing. ClientRequestFilter interface The javax.ws.rs.client.ClientRequestFilter interface is defined as follows: By implementing the ClientRequestFilter , you can create a filter for the ClientRequest extension point on the client side, which filters the request message before sending the message to the server. ClientRequestContext interface The filter method of ClientRequestFilter receives a single argument of type javax.ws.rs.client.ClientRequestContext , which can be used to access the outgoing request message and its related metadata. The ClientRequestContext interface is defined as follows: Sample implementation To implement a client request filter for the ClientRequest extension point (that is, where the filter is executed prior to sending the request message), define a class that implements the ClientRequestFilter interface. For example, the following code shows an example of a simple client request filter that gets installed in the ClientRequest extension point, with a priority of 20: Aborting the invocation It is possible to abort a client-side invocation by implementing a suitable client request filter. For example, you might implement a client-side filter to check whether a request is correctly formatted and, if necessary, abort the request. The following test code always aborts the request, returning the BAD_REQUEST HTTP status to the client calling code: Registering the client request filter Using the JAX-RS 2.0 client API, you can register a client request filter directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the client request filter can optionally be applied to different scopes, so that only certain URI paths are affected by the filter. 
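A client request filter registered in this way might, for instance, attach an authentication token to every outgoing request. The following sketch is illustrative only; the class name and the assumption that the token value is supplied by the calling code are not part of the original examples:
// Java
package org.jboss.fuse.example;

import javax.annotation.Priority;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;

// Illustrative filter: adds a bearer token header to every outgoing request.
@Priority(value = 15)
public class SketchAuthTokenRequestFilter implements ClientRequestFilter {
    private final String token;

    public SketchAuthTokenRequestFilter(String token) {
        this.token = token;
    }

    @Override
    public void filter(ClientRequestContext requestContext) {
        requestContext.getHeaders().putSingle("Authorization", "Bearer " + token);
    }
}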
For example, the following code shows how to register the SampleClientRequestFilter filter so that it applies to all invocations made using the client object; and how to register the TestAbortClientRequestFilter filter, so that it applies only to sub-paths of rest/TestAbortClientRequest . 61.5. Client Response Filter Overview This section explains how to implement and register a client response filter , which is used to intercept an incoming response message on the client side. Client response filters can be used for any kind of generic response processing on the client side. ClientResponseFilter interface The javax.ws.rs.client.ClientResponseFilter interface is defined as follows: By implementing the ClientResponseFilter , you can create a filter for the ClientResponse extension point on the client side, which filters the response message after it is received from the server. ClientResponseContext interface The filter method of ClientResponseFilter receives two arguments: an argument of type javax.ws.rs.client.ClientRequestContext (see the section called "ClientRequestContext interface" ); and an argument of type javax.ws.rs.client.ClientResponseContext , which can be used to access the incoming response message and its related metadata. The ClientResponseContext interface is defined as follows: Sample implementation To implement a client response filter for the ClientResponse extension point (that is, where the filter is executed after receiving a response message from the server), define a class that implements the ClientResponseFilter interface. For example, the following code shows an example of a simple client response filter that gets installed in the ClientResponse extension point, with a priority of 20: Registering the client response filter Using the JAX-RS 2.0 client API, you can register a client response filter directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the client response filter can optionally be applied to different scopes, so that only certain URI paths are affected by the filter. For example, the following code shows how to register the SampleClientResponseFilter filter so that it applies to all invocations made using the client object: 61.6. Entity Reader Interceptor Overview This section explains how to implement and register an entity reader interceptor , which enables you to intercept the input stream when reading a message body either on the client side or on the server side. This is typically useful for generic transformations of the message body, such as encryption and decryption, or compressing and decompressing. ReaderInterceptor interface The javax.ws.rs.ext.ReaderInterceptor interface is defined as follows: By implementing the ReaderInterceptor interface, you can intercept the message body ( Entity object) as it is being read either on the server side or the client side. You can use an entity reader interceptor in either of the following contexts: Server side - if bound as a server-side interceptor, the entity reader interceptor intercepts the request message body when it is accessed by the application code (in the matched resource). Depending on the semantics of the REST request, the message body might not be accessed by the matched resource, in which case the reader interceptor is not called. Client side - if bound as a client-side interceptor, the entity reader interceptor intercepts the response message body when it is accessed by the client code.
If the client code does not explicitly access the response message (for example, by calling the Response.getEntity method), the reader interceptor is not called. ReaderInterceptorContext interface The aroundReadFrom method of ReaderInterceptor receives one argument of type javax.ws.rs.ext.ReaderInterceptorContext , which can be used to access both the message body ( Entity object) and message metadata. The ReaderInterceptorContext interface is defined as follows: InterceptorContext interface The ReaderInterceptorContext interface also supports the methods inherited from the base InterceptorContext interface. The InterceptorContext interface is defined as follows: Sample implementation on the client side To implement an entity reader interceptor for the client side, define a class that implements the ReaderInterceptor interface. For example, the following code shows an example of an entity reader interceptor for the client side (with a priority of 10), which replaces all instances of COMPANY_NAME by Red Hat in the message body of the incoming response: Sample implementation on the server side To implement an entity reader interceptor for the server side, define a class that implements the ReaderInterceptor interface and annotate it with the @Provider annotation. For example, the following code shows an example of an entity reader interceptor for the server side (with a priority of 10), which replaces all instances of COMPANY_NAME by Red Hat in the message body of the incoming request: Binding a reader interceptor on the client side Using the JAX-RS 2.0 client API, you can register an entity reader interceptor directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the reader interceptor can optionally be applied to different scopes, so that only certain URI paths are affected by the interceptor. For example, the following code shows how to register the SampleClientReaderInterceptor interceptor so that it applies to all invocations made using the client object: For more details about registering interceptors with a JAX-RS 2.0 client, see Section 49.5, "Configuring the Client Endpoint" . Binding a reader interceptor on the server side To bind a reader interceptor on the server side (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the reader interceptor class, as shown in the following code fragment: When the reader interceptor implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the reader interceptor to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the interceptor. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.7. Entity Writer Interceptor Overview This section explains how to implement and register an entity writer interceptor , which enables you to intercept the output stream when writing a message body either on the client side or on the server side. 
This is typically useful for generic transformations of the message body, such as encryption and decryption, or compressing and decompressing. WriterInterceptor interface The javax.ws.rs.ext.WriterInterceptor interface is defined as follows: By implementing the WriterInterceptor interface, you can intercept the message body ( Entity object) as it is being written either on the server side or the client side. You can use an entity writer interceptor in either of the following contexts: Server side - if bound as a server-side interceptor, the entity writer interceptor intercepts the response message body just before it is marshalled and sent back to the client. Client side - if bound as a client-side interceptor, the entity writer interceptor intercepts the request message body just before it is marshalled and sent out to the server. WriterInterceptorContext interface The aroundWriteTo method of WriterInterceptor receives one argument of type javax.ws.rs.ext.WriterInterceptorContext , which can be used to access both the message body ( Entity object) and message metadata. The WriterInterceptorContext interface is defined as follows: InterceptorContext interface The WriterInterceptorContext interface also supports the methods inherited from the base InterceptorContext interface. For the definition of InterceptorContext , see the section called "InterceptorContext interface" . Sample implementation on the client side To implement an entity writer interceptor for the client side, define a class that implements the WriterInterceptor interface. For example, the following code shows an example of an entity writer interceptor for the client side (with a priority of 10), which appends an extra line of text to the message body of the outgoing request: Sample implementation on the server side To implement an entity writer interceptor for the server side, define a class that implements the WriterInterceptor interface and annotate it with the @Provider annotation. For example, the following code shows an example of an entity writer interceptor for the server side (with a priority of 10), which appends an extra line of text to the message body of the outgoing response: Binding a writer interceptor on the client side Using the JAX-RS 2.0 client API, you can register an entity writer interceptor directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the writer interceptor can optionally be applied to different scopes, so that only certain URI paths are affected by the interceptor. For example, the following code shows how to register the SampleClientWriterInterceptor interceptor so that it applies to all invocations made using the client object: For more details about registering interceptors with a JAX-RS 2.0 client, see Section 49.5, "Configuring the Client Endpoint" . Binding a writer interceptor on the server side To bind a writer interceptor on the server side (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the writer interceptor class, as shown in the following code fragment: When the writer interceptor implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ).
When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the writer interceptor to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the interceptor. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.8. Dynamic Binding Overview The standard approach to binding container filters and container interceptors to resources is to annotate the filters and interceptors with the @Provider annotation. This ensures that the binding is global : that is, the filters and interceptors are bound to every resource class and resource method on the server side. Dynamic binding is an alternative approach to binding on the server side, which enables you to pick and choose which resource methods your interceptors and filters are applied to. To enable dynamic binding for your filters and interceptors, you must implement a custom DynamicFeature interface, as described here. DynamicFeature interface The DynamicFeature interface is defined in the javax.ws.rx.container package, as follows: Implementing a dynamic feature You implement a dynamic feature, as follows: Implement one or more container filters or container interceptors, as described previously. But do not annotate them with the @Provider annotation (otherwise, they would be bound globally, making the dynamic feature effectively irrelevant). Create your own dynamic feature by implementing the DynamicFeature class, overriding the configure method. In the configure method, you can use the resourceInfo argument to discover which resource class and which resource method this feature is being called for. You can use this information as the basis for deciding whether or not to register some of the filters or interceptors. If you decide to register a filter or an interceptor with the current resource method, you can do so by invoking one of the context.register methods. Remember to annotate your dynamic feature class with the @Provider annotation, to ensure that it gets picked up during the scanning phase of deployment. Example dynamic feature The following example shows you how to define a dynamic feature that registers the LoggingFilter filter for any method of the MyResource class (or subclass) that is annotated with @GET : Dynamic binding process The JAX-RS standard requires that the DynamicFeature.configure method is called exactly once for each resource method . This means that every resource method could potentially have filters or interceptors installed by the dynamic feature, but it is up to the dynamic feature to decide whether to register the filters or interceptors in each case. In other words, the granularity of binding supported by the dynamic feature is at the level of individual resource methods. FeatureContext interface The FeatureContext interface (which enables you to register filters and interceptors in the configure method) is defined as a sub-interface of Configurable<> , as follows: The Configurable<> interface defines a variety of methods for registering filters and interceptors on a single resource method, as follows:
[ "// Java package javax.ws.rs.container; import java.io.IOException; public interface ContainerRequestFilter { public void filter(ContainerRequestContext requestContext) throws IOException; }", "// Java package javax.ws.rs.container; import java.io.InputStream; import java.net.URI; import java.util.Collection; import java.util.Date; import java.util.List; import java.util.Locale; import java.util.Map; import javax.ws.rs.core.Cookie; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Request; import javax.ws.rs.core.Response; import javax.ws.rs.core.SecurityContext; import javax.ws.rs.core.UriInfo; public interface ContainerRequestContext { public Object getProperty(String name); public Collection getPropertyNames(); public void setProperty(String name, Object object); public void removeProperty(String name); public UriInfo getUriInfo(); public void setRequestUri(URI requestUri); public void setRequestUri(URI baseUri, URI requestUri); public Request getRequest(); public String getMethod(); public void setMethod(String method); public MultivaluedMap getHeaders(); public String getHeaderString(String name); public Date getDate(); public Locale getLanguage(); public int getLength(); public MediaType getMediaType(); public List getAcceptableMediaTypes(); public List getAcceptableLanguages(); public Map getCookies(); public boolean hasEntity(); public InputStream getEntityStream(); public void setEntityStream(InputStream input); public SecurityContext getSecurityContext(); public void setSecurityContext(SecurityContext context); public void abortWith(Response response); }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.container.PreMatching; import javax.annotation.Priority; import javax.ws.rs.ext.Provider; @PreMatching @Priority(value = 20) @Provider public class SamplePreMatchContainerRequestFilter implements ContainerRequestFilter { public SamplePreMatchContainerRequestFilter() { System.out.println(\"SamplePreMatchContainerRequestFilter starting up\"); } @Override public void filter(ContainerRequestContext requestContext) { System.out.println(\"SamplePreMatchContainerRequestFilter.filter() invoked\"); } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.ext.Provider; import javax.annotation.Priority; @Provider @Priority(value = 30) public class SampleContainerRequestFilter implements ContainerRequestFilter { public SampleContainerRequestFilter() { System.out.println(\"SampleContainerRequestFilter starting up\"); } @Override public void filter(ContainerRequestContext requestContext) { System.out.println(\"SampleContainerRequestFilter.filter() invoked\"); } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.container.ResourceInfo; import javax.ws.rs.ext.Provider; import javax.annotation.Priority; import javax.ws.rs.core.Context; @Provider @Priority(value = 30) public class SampleContainerRequestFilter implements ContainerRequestFilter { @Context private ResourceInfo resinfo; public SampleContainerRequestFilter() { } @Override public void filter(ContainerRequestContext requestContext) { String resourceClass = resinfo.getResourceClass().getName(); String methodName = 
resinfo.getResourceMethod().getName(); System.out.println(\"REST invocation bound to resource class: \" + resourceClass); System.out.println(\"REST invocation bound to resource method: \" + methodName); } }", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.container.PreMatching; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.ResponseBuilder; import javax.ws.rs.core.Response.Status; import javax.ws.rs.ext.Provider; @PreMatching @Priority(value = 20) @Provider public class SampleAuthenticationRequestFilter implements ContainerRequestFilter { public SampleAuthenticationRequestFilter() { System.out.println(\"SampleAuthenticationRequestFilter starting up\"); } @Override public void filter(ContainerRequestContext requestContext) { ResponseBuilder responseBuilder = null; Response response = null; String userName = requestContext.getUriInfo().getQueryParameters().getFirst(\"UserName\"); String password = requestContext.getUriInfo().getQueryParameters().getFirst(\"Password\"); if (authenticate(userName, password) == false) { responseBuilder = Response.serverError(); response = responseBuilder.status(Status.BAD_REQUEST).build(); requestContext.abortWith(response); } } public boolean authenticate(String userName, String password) { // Perform authentication of 'user' } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.ext.Provider; import javax.annotation.Priority; @Provider @Priority(value = 30) public class SampleContainerRequestFilter implements ContainerRequestFilter { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"filterProvider\" /> </jaxrs:providers> <bean id=\"filterProvider\" class=\"org.jboss.fuse.example.SampleContainerRequestFilter\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.container; import java.io.IOException; public interface ContainerResponseFilter { public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException; }", "// Java package javax.ws.rs.container; import java.io.OutputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.net.URI; import java.util.Date; import java.util.Locale; import java.util.Map; import java.util.Set; import javax.ws.rs.core.EntityTag; import javax.ws.rs.core.Link; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.NewCookie; import javax.ws.rs.core.Response; import javax.ws.rs.ext.MessageBodyWriter; public interface ContainerResponseContext { public int getStatus(); public void setStatus(int code); public Response.StatusType getStatusInfo(); public void setStatusInfo(Response.StatusType statusInfo); public MultivaluedMap<String, Object> getHeaders(); public abstract MultivaluedMap<String, String> getStringHeaders(); public String getHeaderString(String name); public Set<String> getAllowedMethods(); public Date getDate(); public Locale getLanguage(); public int getLength(); public MediaType getMediaType(); public Map<String, NewCookie> 
getCookies(); public EntityTag getEntityTag(); public Date getLastModified(); public URI getLocation(); public Set<Link> getLinks(); boolean hasLink(String relation); public Link getLink(String relation); public Link.Builder getLinkBuilder(String relation); public boolean hasEntity(); public Object getEntity(); public Class<?> getEntityClass(); public Type getEntityType(); public void setEntity(final Object entity); public void setEntity( final Object entity, final Annotation[] annotations, final MediaType mediaType); public Annotation[] getEntityAnnotations(); public OutputStream getEntityStream(); public void setEntityStream(OutputStream outputStream); }", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerResponseContext; import javax.ws.rs.container.ContainerResponseFilter; import javax.ws.rs.ext.Provider; @Provider @Priority(value = 10) public class SampleContainerResponseFilter implements ContainerResponseFilter { public SampleContainerResponseFilter() { System.out.println(\"SampleContainerResponseFilter starting up\"); } @Override public void filter( ContainerRequestContext requestContext, ContainerResponseContext responseContext ) { // This filter replaces the response message body with a fixed string if (responseContext.hasEntity()) { responseContext.setEntity(\"New message body!\"); } } }", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerResponseContext; import javax.ws.rs.container.ContainerResponseFilter; import javax.ws.rs.ext.Provider; @Provider @Priority(value = 10) public class SampleContainerResponseFilter implements ContainerResponseFilter { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"filterProvider\" /> </jaxrs:providers> <bean id=\"filterProvider\" class=\"org.jboss.fuse.example.SampleContainerResponseFilter\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.client; import javax.ws.rs.client.ClientRequestFilter; import javax.ws.rs.client.ClientRequestContext; public interface ClientRequestFilter { void filter(ClientRequestContext requestContext) throws IOException; }", "// Java package javax.ws.rs.client; import java.io.OutputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.net.URI; import java.util.Collection; import java.util.Date; import java.util.List; import java.util.Locale; import java.util.Map; import javax.ws.rs.core.Configuration; import javax.ws.rs.core.Cookie; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Response; import javax.ws.rs.ext.MessageBodyWriter; public interface ClientRequestContext { public Object getProperty(String name); public Collection<String> getPropertyNames(); public void setProperty(String name, Object object); public void removeProperty(String name); public URI getUri(); public void setUri(URI uri); public String getMethod(); public void setMethod(String method); public MultivaluedMap<String, Object> getHeaders(); public abstract MultivaluedMap<String, String> getStringHeaders(); public String getHeaderString(String name); 
public Date getDate(); public Locale getLanguage(); public MediaType getMediaType(); public List<MediaType> getAcceptableMediaTypes(); public List<Locale> getAcceptableLanguages(); public Map<String, Cookie> getCookies(); public boolean hasEntity(); public Object getEntity(); public Class<?> getEntityClass(); public Type getEntityType(); public void setEntity(final Object entity); public void setEntity( final Object entity, final Annotation[] annotations, final MediaType mediaType); public Annotation[] getEntityAnnotations(); public OutputStream getEntityStream(); public void setEntityStream(OutputStream outputStream); public Client getClient(); public Configuration getConfiguration(); public void abortWith(Response response); }", "// Java package org.jboss.fuse.example; import javax.ws.rs.client.ClientRequestContext; import javax.ws.rs.client.ClientRequestFilter; import javax.annotation.Priority; @Priority(value = 20) public class SampleClientRequestFilter implements ClientRequestFilter { public SampleClientRequestFilter() { System.out.println(\"SampleClientRequestFilter starting up\"); } @Override public void filter(ClientRequestContext requestContext) { System.out.println(\"ClientRequestFilter.filter() invoked\"); } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.client.ClientRequestContext; import javax.ws.rs.client.ClientRequestFilter; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import javax.annotation.Priority; @Priority(value = 10) public class TestAbortClientRequestFilter implements ClientRequestFilter { public TestAbortClientRequestFilter() { System.out.println(\"TestAbortClientRequestFilter starting up\"); } @Override public void filter(ClientRequestContext requestContext) { // Test filter: aborts with BAD_REQUEST status requestContext.abortWith(Response.status(Status.BAD_REQUEST).build()); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(new SampleClientRequestFilter()); WebTarget target = client .target(\"http://localhost:8001/rest/TestAbortClientRequest\"); target.register(new TestAbortClientRequestFilter());", "// Java package javax.ws.rs.client; import java.io.IOException; public interface ClientResponseFilter { void filter(ClientRequestContext requestContext, ClientResponseContext responseContext) throws IOException; }", "// Java package javax.ws.rs.client; import java.io.InputStream; import java.net.URI; import java.util.Date; import java.util.Locale; import java.util.Map; import java.util.Set; import javax.ws.rs.core.EntityTag; import javax.ws.rs.core.Link; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.NewCookie; import javax.ws.rs.core.Response; public interface ClientResponseContext { public int getStatus(); public void setStatus(int code); public Response.StatusType getStatusInfo(); public void setStatusInfo(Response.StatusType statusInfo); public MultivaluedMap<String, String> getHeaders(); public String getHeaderString(String name); public Set<String> getAllowedMethods(); public Date getDate(); public Locale getLanguage(); public int getLength(); public MediaType getMediaType(); public Map<String, NewCookie> getCookies(); public EntityTag getEntityTag(); public Date getLastModified(); public URI getLocation(); public Set<Link> getLinks(); boolean hasLink(String 
relation); public Link getLink(String relation); public Link.Builder getLinkBuilder(String relation); public boolean hasEntity(); public InputStream getEntityStream(); public void setEntityStream(InputStream input); }", "// Java package org.jboss.fuse.example; import javax.ws.rs.client.ClientRequestContext; import javax.ws.rs.client.ClientResponseContext; import javax.ws.rs.client.ClientResponseFilter; import javax.annotation.Priority; @Priority(value = 20) public class SampleClientResponseFilter implements ClientResponseFilter { public SampleClientResponseFilter() { System.out.println(\"SampleClientResponseFilter starting up\"); } @Override public void filter( ClientRequestContext requestContext, ClientResponseContext responseContext ) { // Add an extra header on the response responseContext.getHeaders().putSingle(\"MyCustomHeader\", \"my custom data\"); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(new SampleClientResponseFilter());", "// Java package javax.ws.rs.ext; public interface ReaderInterceptor { public Object aroundReadFrom(ReaderInterceptorContext context) throws java.io.IOException, javax.ws.rs.WebApplicationException; }", "// Java package javax.ws.rs.ext; import java.io.IOException; import java.io.InputStream; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MultivaluedMap; public interface ReaderInterceptorContext extends InterceptorContext { public Object proceed() throws IOException, WebApplicationException; public InputStream getInputStream(); public void setInputStream(InputStream is); public MultivaluedMap<String, String> getHeaders(); }", "// Java package javax.ws.rs.ext; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.util.Collection; import javax.ws.rs.core.MediaType; public interface InterceptorContext { public Object getProperty(String name); public Collection<String> getPropertyNames(); public void setProperty(String name, Object object); public void removeProperty(String name); public Annotation[] getAnnotations(); public void setAnnotations(Annotation[] annotations); Class<?> getType(); public void setType(Class<?> type); Type getGenericType(); public void setGenericType(Type genericType); public MediaType getMediaType(); public void setMediaType(MediaType mediaType); }", "// Java package org.jboss.fuse.example; import java.io.ByteArrayInputStream; import java.io.IOException; import java.io.InputStream; import javax.annotation.Priority; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.ReaderInterceptorContext; @Priority(value = 10) public class SampleClientReaderInterceptor implements ReaderInterceptor { @Override public Object aroundReadFrom(ReaderInterceptorContext interceptorContext) throws IOException, WebApplicationException { InputStream inputStream = interceptorContext.getInputStream(); byte[] bytes = new byte[inputStream.available()]; inputStream.read(bytes); String responseContent = new String(bytes); responseContent = responseContent.replaceAll(\"COMPANY_NAME\", \"Red Hat\"); interceptorContext.setInputStream(new ByteArrayInputStream(responseContent.getBytes())); return interceptorContext.proceed(); } }", "// Java package org.jboss.fuse.example; import java.io.ByteArrayInputStream; import java.io.IOException; import 
java.io.InputStream; import javax.annotation.Priority; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.ReaderInterceptorContext; @Priority(value = 10) @Provider public class SampleServerReaderInterceptor implements ReaderInterceptor { @Override public Object aroundReadFrom(ReaderInterceptorContext interceptorContext) throws IOException, WebApplicationException { InputStream inputStream = interceptorContext.getInputStream(); byte[] bytes = new byte[inputStream.available()]; inputStream.read(bytes); String requestContent = new String(bytes); requestContent = requestContent.replaceAll(\"COMPANY_NAME\", \"Red Hat\"); interceptorContext.setInputStream(new ByteArrayInputStream(requestContent.getBytes())); return interceptorContext.proceed(); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(SampleClientReaderInterceptor.class);", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.ReaderInterceptorContext; @Priority(value = 10) @Provider public class SampleServerReaderInterceptor implements ReaderInterceptor { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"interceptorProvider\" /> </jaxrs:providers> <bean id=\"interceptorProvider\" class=\"org.jboss.fuse.example.SampleServerReaderInterceptor\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.ext; public interface WriterInterceptor { void aroundWriteTo(WriterInterceptorContext context) throws java.io.IOException, javax.ws.rs.WebApplicationException; }", "// Java package javax.ws.rs.ext; import java.io.IOException; import java.io.OutputStream; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MultivaluedMap; public interface WriterInterceptorContext extends InterceptorContext { void proceed() throws IOException, WebApplicationException; Object getEntity(); void setEntity(Object entity); OutputStream getOutputStream(); public void setOutputStream(OutputStream os); MultivaluedMap<String, Object> getHeaders(); }", "// Java package org.jboss.fuse.example; import java.io.IOException; import java.io.OutputStream; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.WriterInterceptor; import javax.ws.rs.ext.WriterInterceptorContext; import javax.annotation.Priority; @Priority(value = 10) public class SampleClientWriterInterceptor implements WriterInterceptor { @Override public void aroundWriteTo(WriterInterceptorContext interceptorContext) throws IOException, WebApplicationException { OutputStream outputStream = interceptorContext.getOutputStream(); String appendedContent = \"\\nInterceptors always get the last word in.\"; outputStream.write(appendedContent.getBytes()); interceptorContext.setOutputStream(outputStream); interceptorContext.proceed(); } }", "// Java package org.jboss.fuse.example; import java.io.IOException; import java.io.OutputStream; import 
javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.WriterInterceptor; import javax.ws.rs.ext.WriterInterceptorContext; import javax.annotation.Priority; @Priority(value = 10) @Provider public class SampleServerWriterInterceptor implements WriterInterceptor { @Override public void aroundWriteTo(WriterInterceptorContext interceptorContext) throws IOException, WebApplicationException { OutputStream outputStream = interceptorContext.getOutputStream(); String appendedContent = \"\\nInterceptors always get the last word in.\"; outputStream.write(appendedContent.getBytes()); interceptorContext.setOutputStream(outputStream); interceptorContext.proceed(); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(SampleClientWriterInterceptor.class);", "// Java package org.jboss.fuse.example; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.WriterInterceptor; import javax.ws.rs.ext.WriterInterceptorContext; import javax.annotation.Priority; @Priority(value = 10) @Provider public class SampleServerWriterInterceptor implements WriterInterceptor { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"interceptorProvider\" /> </jaxrs:providers> <bean id=\"interceptorProvider\" class=\"org.jboss.fuse.example.SampleServerWriterInterceptor\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.container; import javax.ws.rs.core.FeatureContext; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.WriterInterceptor; public interface DynamicFeature { public void configure(ResourceInfo resourceInfo, FeatureContext context); }", "// Java import javax.ws.rs.GET; import javax.ws.rs.container.DynamicFeature; import javax.ws.rs.container.ResourceInfo; import javax.ws.rs.core.FeatureContext; import javax.ws.rs.ext.Provider; @Provider public class DynamicLoggingFilterFeature implements DynamicFeature { @Override public void configure(ResourceInfo resourceInfo, FeatureContext context) { if (MyResource.class.isAssignableFrom(resourceInfo.getResourceClass()) && resourceInfo.getResourceMethod().isAnnotationPresent(GET.class)) { context.register(new LoggingFilter()); } } }", "// Java package javax.ws.rs.core; public interface FeatureContext extends Configurable<FeatureContext> { }", "// Java package javax.ws.rs.core; import java.util.Map; public interface Configurable<C extends Configurable> { public Configuration getConfiguration(); public C property(String name, Object value); public C register(Class<?> componentClass); public C register(Class<?> componentClass, int priority); public C register(Class<?> componentClass, Class<?>... contracts); public C register(Class<?> componentClass, Map<Class<?>, Integer> contracts); public C register(Object component); public C register(Object component, int priority); public C register(Object component, Class<?>... contracts); public C register(Object component, Map<Class<?>, Integer> contracts); }" ]
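To make the relationship between a dynamic feature and the Configurable registration methods concrete, the following sketch binds a filter only to resource methods annotated with @GET and registers it with an explicit priority. The feature class and the AuditRequestFilter it registers are hypothetical (the filter is assumed to be written without the @Provider annotation, so that binding is controlled entirely by the feature):
// Java
package org.jboss.fuse.example;

import javax.ws.rs.GET;
import javax.ws.rs.container.DynamicFeature;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.Provider;

// Illustrative dynamic feature: AuditRequestFilter is a hypothetical
// ContainerRequestFilter defined elsewhere, deliberately not annotated
// with @Provider so that it is bound only where this feature registers it.
@Provider
public class SketchGetOnlyBindingFeature implements DynamicFeature {
    @Override
    public void configure(ResourceInfo resourceInfo, FeatureContext context) {
        if (resourceInfo.getResourceMethod().isAnnotationPresent(GET.class)) {
            // Register the filter with an explicit priority of 30
            context.register(AuditRequestFilter.class, 30);
        }
    }
}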
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxrs20filters
Chapter 7. Using the Dev Spaces server API
Chapter 7. Using the Dev Spaces server API To manage OpenShift Dev Spaces server workloads, use the Swagger web user interface to navigate the OpenShift Dev Spaces server API. Procedure Navigate to the Swagger API web user interface: https:// <openshift_dev_spaces_fqdn> /swagger . Additional resources Swagger
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/administration_guide/managing-workloads-using-the-devspaces-server-api
3.3. NIS
3.3. NIS Important Before NIS can be configured as an identity store, NIS itself must be configured for the environment: A NIS server must be fully configured with user accounts set up. The ypbind package must be installed on the local system. This is required for NIS services, but is not installed by default. The portmap and ypbind services are started and enabled to start at boot time. This should be configured as part of the ypbind package installation. 3.3.1. Configuring NIS Authentication from the UI Open the authconfig UI, as in Section 2.2.3, "Launching the authconfig UI" . Select NIS in the User Account Database drop-down menu. Set the information to connect to the NIS server, meaning the NIS domain name and the server host name. If the NIS server is not specified, the authconfig daemon scans for the NIS server. Select the authentication method. NIS allows simple password authentication or Kerberos authentication. Using Kerberos is described in Section 4.3.1, "Configuring Kerberos Authentication from the UI" . 3.3.2. Configuring NIS from the Command Line To use a NIS identity store, use the --enablenis option. This automatically uses NIS authentication, unless the Kerberos parameters are explicitly set ( Section 4.3.2, "Configuring Kerberos Authentication from the Command Line" ). The only other parameters identify the NIS server and the NIS domain; if these are not used, then the authconfig service scans the network for NIS servers.
[ "authconfig --enablenis --nisdomain=EXAMPLE --nisserver=nis.example.com --update" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/configuring-nis-auth
Chapter 1. Documentation moved
Chapter 1. Documentation moved The OpenShift sandboxed containers user guide and release notes have moved to a new location .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/openshift_sandboxed_containers/sandboxed-containers-moved
Part III. Technology Previews
Part III. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 7.5. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology-previews
Chapter 5. Installing a cluster on GCP with network customizations
Chapter 5. Installing a cluster on GCP with network customizations In OpenShift Container Platform version 4.18, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 5.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 5.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Note Not all instance types are available in all regions and zones. For a detailed breakdown of which instance types are available in which zones, see regions and zones (Google documentation). Some instance types require the use of Hyperdisk storage. If you use an instance type that requires Hyperdisk storage, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use Hyperdisk storage. For more information, see machine series support for Hyperdisk (Google documentation). For instructions on modifying storage classes, see the "GCE PersistentDisk (gcePD) object definition" section in the Dynamic Provisioning page in Storage . Example 5.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 5.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 5.2. Machine series for 64-bit ARM machines C4A Tau T2A 5.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation".
The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 5.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 5.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 5.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24 1 15 18 19 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 16 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 20 Optional: A set of network tags to apply to the control plane or compute machine sets. 
The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 21 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 17 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 23 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 24 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 5.5.8. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. 
Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 5.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 5.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 5.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 5.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
You have added one of the following authentication options to the GCP account that the ccoctl utility uses: The IAM Workload Identity Pool Admin role The following granular permissions: compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 5.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. If you plan to install the GCP Filestore Container Storage Interface (CSI) Driver Operator, retain this value. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 5.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 5.4. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 5.8. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork nodeNetworking For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 5.9. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 5.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
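For orientation before the field-by-field tables that follow, the sketch below shows how a customized defaultNetwork stanza might look in the manifests/cluster-network-03-config.yml file described earlier. This is a minimal, illustrative example rather than a recommended configuration: the MTU of 1400, the default Geneve port of 6081, and routingViaHost: false are assumed values, and you would normally set only the fields you actually need to override.

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400             # override only if the auto-detected MTU is unsuitable
      genevePort: 6081      # default port for Geneve encapsulation
      gatewayConfig:
        routingViaHost: false   # keep egress traffic in OVN (the default behavior)

The meaning of each of these fields, including defaults and constraints, is defined in the tables in the next section.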
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.3. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 5.5. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. 
For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 5.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 5.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. 
ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 5.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 5.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 5.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 5.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 
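As a concrete illustration of this optional clean-up, the following gcloud commands show one way to swap the roles. They are not part of the documented procedure, and the <project_id> and <service_account_email> values are placeholders that you must replace with your own project ID and installer service account email:

$ gcloud projects remove-iam-policy-binding <project_id> \
    --member="serviceAccount:<service_account_email>" --role="roles/owner"
$ gcloud projects add-iam-policy-binding <project_id> \
    --member="serviceAccount:<service_account_email>" --role="roles/viewer"
$ gcloud projects remove-iam-policy-binding <project_id> \
    --member="serviceAccount:<service_account_email>" --role="roles/iam.serviceAccountKeyAdmin"

Run the last command only if the Service Account Key Admin role was actually granted, and verify the resulting bindings with gcloud projects get-iam-policy <project_id> before relying on the reduced permissions.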
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.14. steps Customize your cluster . 
If necessary, you can opt out of remote health reporting.
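As an optional final check before moving on to these next steps (not part of the documented procedure), you can confirm overall cluster health from the CLI with standard oc queries:

$ oc get clusteroperators
$ oc get nodes

All cluster Operators should eventually report Available as True and Degraded as False, and every node should reach the Ready state; if they do not, review the installer log and the cluster Operator status before customizing the cluster.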
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 
24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml 
openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_gcp/installing-gcp-network-customizations
function::sprint_backtrace
function::sprint_backtrace Name function::sprint_backtrace - Return stack back trace as string Synopsis Arguments None Description Returns a simple (kernel) backtrace, one line per address. Each line includes the symbol name (or the hex address if the symbol could not be resolved) and the module name (if found). It also includes the offset from the start of the function, if found; otherwise the offset is added to the module (if found, between brackets). Returns the backtrace as a string, with each line terminated by a newline character. Note that the returned stack is truncated to MAXSTRINGLEN; to print fuller and richer stacks, use print_backtrace. Equivalent to sprint_stack(backtrace()), but more efficient (no need to translate between hex strings and the final backtrace string).
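A minimal usage sketch follows; the probe point ( kernel.function("vfs_read") ) and the execname filter are arbitrary choices for illustration and are not part of this reference entry:

probe kernel.function("vfs_read")
{
  # Only report for a specific command to keep the output small.
  if (execname() == "cat") {
    # sprint_backtrace() returns the kernel backtrace as one string.
    printf("%s\n", sprint_backtrace())
    exit()
  }
}

Saved as, for example, backtrace.stp and run with stap backtrace.stp (as root), this prints a single kernel backtrace the first time a cat process enters vfs_read and then exits.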
[ "sprint_backtrace:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sprint-backtrace
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/migrating_to_data_grid_8/making-open-source-more-inclusive_datagrid
4.3.6. Tracking System Call Volume Per Process
4.3.6. Tracking System Call Volume Per Process This section illustrates how to determine which processes are performing the highest volume of system calls. In previous sections, we described how to monitor the top system calls used by the system over time ( Section 4.3.5, "Tracking Most Frequently Used System Calls" ). We also described how to identify which applications use a specific set of "polling suspect" system calls the most ( Section 4.3.4, "Monitoring Polling Applications" ). Monitoring the volume of system calls made by each process provides more data when investigating your system for polling processes and other resource hogs. syscalls_by_proc.stp syscalls_by_proc.stp lists the top 20 processes performing the highest number of system calls. It also lists how many system calls each process performed during the time period. Refer to Example 4.16, "syscalls_by_proc.stp Sample Output" for a sample output. Example 4.16. syscalls_by_proc.stp Sample Output If you prefer the output to display the process IDs instead of the process names, use the following script instead. syscalls_by_pid.stp As indicated in the output, you need to manually exit the script in order to display the results. You can add a timed expiration to either script by adding a timer.s() probe; for example, to instruct the script to expire after 5 seconds, add the following probe to the script:
[ "#! /usr/bin/env stap Copyright (C) 2006 IBM Corp. # This file is part of systemtap, and is free software. You can redistribute it and/or modify it under the terms of the GNU General Public License (GPL); either version 2, or (at your option) any later version. # Print the system call count by process name in descending order. # global syscalls probe begin { print (\"Collecting data... Type Ctrl-C to exit and display results\\n\") } probe syscall.* { syscalls[execname()]++ } probe end { printf (\"%-10s %-s\\n\", \"#SysCalls\", \"Process Name\") foreach (proc in syscalls-) printf(\"%-10d %-s\\n\", syscalls[proc], proc) }", "Collecting data... Type Ctrl-C to exit and display results #SysCalls Process Name 1577 multiload-apple 692 synergyc 408 pcscd 376 mixer_applet2 299 gnome-terminal 293 Xorg 206 scim-panel-gtk 95 gnome-power-man 90 artsd 85 dhcdbd 84 scim-bridge 78 gnome-screensav 66 scim-launcher [...]", "#! /usr/bin/env stap Copyright (C) 2006 IBM Corp. # This file is part of systemtap, and is free software. You can redistribute it and/or modify it under the terms of the GNU General Public License (GPL); either version 2, or (at your option) any later version. # Print the system call count by process ID in descending order. # global syscalls probe begin { print (\"Collecting data... Type Ctrl-C to exit and display results\\n\") } probe syscall.* { syscalls[pid()]++ } probe end { printf (\"%-10s %-s\\n\", \"#SysCalls\", \"PID\") foreach (pid in syscalls-) printf(\"%-10d %-d\\n\", syscalls[pid], pid) }", "probe timer.s(5) { exit() }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/syscallsbyprocpidsect