title
stringlengths 4
168
| content
stringlengths 7
1.74M
| commands
sequencelengths 1
5.62k
⌀ | url
stringlengths 79
342
|
---|---|---|---|
Chapter 1. Introducing remote host configuration and management | Chapter 1. Introducing remote host configuration and management Remote host configuration is a powerful tool that enables the following capabilities: Easy registration. With the rhc client, you can register systems to Red Hat Subscription Management (RHSM) and Red Hat Insights for Red Hat Enterprise Linux. Configuration management. Using the remote host configuration manager, you can configure the connection with Insights for Red Hat Enterprise Linux for all of the Red Hat Enterprise Linux (RHEL) systems in your infrastructure. You can enable or disable the rhc client, direct remediations, and other application settings from Insights for Red Hat Enterprise Linux. Remediations from Insights for Red Hat Enterprise Linux. When systems are connected to Insights for Red Hat Enterprise Linux with the rhc client, you can manage the end-to-end experience of finding and fixing issues. Registered systems can directly consume remediation playbooks executed from the Insights for Red Hat Enterprise Linux application. Supported configurations The rhc client is supported on systems registered to Insights for Red Hat Enterprise Linux and running Red Hat Enterprise Linux (RHEL) 8.5 and later, and RHEL 9.0 and later. Single-command registration is supported by RHEL 8.6 and later, and RHEL 9.0 and later. 1.1. Remote host configuration components The complete remote host configuration solution comes with two main components: a client-side daemon and a server-side service to facilitate system management. The remote configuration client. The rhc client comes preinstalled with all Red Hat Enterprise Linux (RHEL) 8.5 and later installations, with the exception of minimal installation. The rhc client consists of the following utility programs: The rhcd daemon runs on the system and listens for messages from the Red Hat Hybrid Cloud Console. It also receives and executes remediation playbooks for systems that are properly configured. The rhc command-line utility for RHEL. The remote host configuration manager. With the remote host configuration manager user interface, you can enable or disable Insights for Red Hat Enterprise Linux connectivity and features. To maximize the value of remote host configuration, you must install additional packages. To allow systems to be managed by remote host configuration manager and to support the execution of remediation playbooks, install the following additional packages: ansible or ansible-core rhc-worker-playbook Important Starting with RHEL 8.6 and RHEL 9.0, the ansible-core and rhc-worker-playbook packages should automatically be installed in the background to make your system fully manageable from the remote host configuration manager user interface. However, a known bug is preventing the process from completing as expected. Thus, the packages must be installed manually after registration. 1.2. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 1.2.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.2.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.2.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. See User Access Configuration Guide for Role-based Access Control (RBAC) for additional information. 1.2.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. 1.2.3. User Access roles for remote host configuration and management There are several User Access roles that are relevant for Red Hat Insights for Red Hat Enterprise Linux users. These roles determine if an Insights user can simply view settings or change them, and use remediation features. User Access roles for using the Remote Host Configuration Manager in the Insights for Red Hat Enterprise Linux web console RHC administrator. Members in a group with this role can perform any operations in the rhc manager. RHC user. This is a default permission for all users on your organization's Red Hat Hybrid Cloud Console account, allowing anyone to see the current status of the configuration. User Access roles for using remediation features in the Insights for Red Hat Enterprise Linux web console Remediations administrator. Members in a group with this role can perform any available operation against any remediations resource, including direct remediations. Remediations user. Members in a group with this role can create, view, update, and delete operations against any remediations resource. This is a default permission given to all Hybrid Cloud Console users on your account. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/remote_host_configuration_and_management/intro-rhc |
Chapter 37. JMS - AMQP 1.0 Kamelet Sink | Chapter 37. JMS - AMQP 1.0 Kamelet Sink A Kamelet that can produce events to any AMQP 1.0 compliant message broker using the Apache Qpid JMS client 37.1. Configuration Options The following table summarizes the configuration options available for the jms-amqp-10-sink Kamelet: Property Name Description Type Default Example destinationName * Destination Name The JMS destination name string remoteURI * Broker URL The JMS URL string "amqp://my-host:31616" destinationType Destination Type The JMS destination type (i.e.: queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 37.2. Dependencies At runtime, the jms-amqp-10-sink Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:org.apache.qpid:qpid-jms-client:0.55.0 37.3. Usage This section describes how you can use the jms-amqp-10-sink . 37.3.1. Knative Sink You can use the jms-amqp-10-sink Kamelet as a Knative sink by binding it to a Knative object. jms-amqp-10-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" 37.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 37.3.1.2. Procedure for using the cluster CLI Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-amqp-10-sink-binding.yaml 37.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616" This command creates the KameletBinding in the current namespace on the cluster. 37.3.2. Kafka Sink You can use the jms-amqp-10-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jms-amqp-10-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" 37.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 37.3.2.2. Procedure for using the cluster CLI Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-amqp-10-sink-binding.yaml 37.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616" This command creates the KameletBinding in the current namespace on the cluster. 37.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-amqp-10-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: \"The Destination Name\" remoteURI: \"amqp://my-host:31616\"",
"apply -f jms-amqp-10-sink-binding.yaml",
"kamel bind channel:mychannel jms-amqp-10-sink -p \"sink.destinationName=The Destination Name\" -p \"sink.remoteURI=amqp://my-host:31616\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: \"The Destination Name\" remoteURI: \"amqp://my-host:31616\"",
"apply -f jms-amqp-10-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p \"sink.destinationName=The Destination Name\" -p \"sink.remoteURI=amqp://my-host:31616\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/jms-sink |
Chapter 27. Uninstalling Streams for Apache Kafka | Chapter 27. Uninstalling Streams for Apache Kafka You can uninstall Streams for Apache Kafka on OpenShift 4.12 to 4.16 from the OperatorHub using the OpenShift Container Platform web console or CLI. Use the same approach you used to install Streams for Apache Kafka. When you uninstall Streams for Apache Kafka, you will need to identify resources created specifically for a deployment and referenced from the Streams for Apache Kafka resource. Such resources include: Secrets (Custom CAs and certificates, Kafka Connect secrets, and other Kafka secrets) Logging ConfigMaps (of type external ) These are resources referenced by Kafka , KafkaConnect , KafkaMirrorMaker , or KafkaBridge configuration. Warning Deleting CRDs and related custom resources When a CustomResourceDefinition is deleted, custom resources of that type are also deleted. This includes the Kafka , KafkaConnect , KafkaMirrorMaker , and KafkaBridge resources managed by Streams for Apache Kafka, as well as the StrimziPodSet resource Streams for Apache Kafka uses to manage the pods of the Kafka components. In addition, any OpenShift resources created by these custom resources, such as Deployment , Pod , Service , and ConfigMap resources, are also removed. Always exercise caution when deleting these resources to avoid unintended data loss. 27.1. Uninstalling Streams for Apache Kafka from the OperatorHub using the web console This procedure describes how to uninstall Streams for Apache Kafka from the OperatorHub and remove resources related to the deployment. You can perform the steps from the console or use alternative CLI commands. Prerequisites Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled Streams for Apache Kafka. Command to find resources related to a Streams for Apache Kafka deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Navigate in the OpenShift web console to Operators > Installed Operators . For the installed Streams for Apache Kafka operator, select the options icon (three vertical dots) and click Uninstall Operator . The operator is removed from Installed Operators . Navigate to Home > Projects and select the project where you installed Streams for Apache Kafka and the Kafka components. Click the options under Inventory to delete related resources. Resources include the following: Deployments StatefulSets Pods Services ConfigMaps Secrets Tip Use the search to find related resources that begin with the name of the Kafka cluster. You can also find the resources under Workloads . Alternative CLI commands You can use CLI commands to uninstall Streams for Apache Kafka from the OperatorHub. Delete the Streams for Apache Kafka subscription. oc delete subscription amq-streams -n openshift-operators Delete the cluster service version (CSV). oc delete csv amqstreams. <version> -n openshift-operators Remove related CRDs. oc get crd -l app=strimzi -o name | xargs oc delete 27.2. Uninstalling Streams for Apache Kafka using the CLI This procedure describes how to use the oc command-line tool to uninstall Streams for Apache Kafka and remove resources related to the deployment. Prerequisites Access to an OpenShift cluster using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled Streams for Apache Kafka. Command to find resources related to a Streams for Apache Kafka deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Delete the Cluster Operator Deployment , related CustomResourceDefinitions , and RBAC resources. Specify the installation files used to deploy the Cluster Operator. oc delete -f install/cluster-operator Delete the resources you identified in the prerequisites. oc delete <resource_type> <resource_name> -n <namespace> Replace <resource_type> with the type of resource you are deleting and <resource_name> with the name of the resource. Example to delete a secret oc delete secret my-cluster-clients-ca-cert -n my-project | [
"get <resource_type> --all-namespaces | grep <kafka_cluster_name>",
"delete subscription amq-streams -n openshift-operators",
"delete csv amqstreams. <version> -n openshift-operators",
"get crd -l app=strimzi -o name | xargs oc delete",
"get <resource_type> --all-namespaces | grep <kafka_cluster_name>",
"delete -f install/cluster-operator",
"delete <resource_type> <resource_name> -n <namespace>",
"delete secret my-cluster-clients-ca-cert -n my-project"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-uninstalling-str |
Chapter 3. Distribution of content in RHEL 8 | Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.3_release_notes/Distribution-of-content-in-RHEL-8 |
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM LinuxONE | Chapter 2. Installing a cluster with z/VM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.15, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One instance of z/VM 7.2 or later On your z/VM instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up. Disk storage FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.2 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE (IBM(R) Documentation). IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up. Disk storage FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM(R) Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 2.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 5 1 For installations on DASD-type disks, add coreos.inst.install_dev=/dev/dasda . Omit this value for FCP-type disks. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 4 For installations on FCP-type disks, add zfcp.allow_lun_scan=0 . Omit this value for DASD-type disks. 5 For installations on DASD-type disks, replace with rd.dasd=0.0.3490 to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.13.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.13.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.14. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.16. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.17. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 2.17.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.17.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: Then, change the line to 2.17.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.18. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 2.20. steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z |
Appendix A. The Device Mapper | Appendix A. The Device Mapper The Device Mapper is a kernel driver that provides a framework for volume management. It provides a generic way of creating mapped devices, which may be used as logical volumes. It does not specifically know about volume groups or metadata formats. The Device Mapper provides the foundation for a number of higher-level technologies. In addition to LVM, Device-Mapper multipath and the dmraid command use the Device Mapper. The application interface to the Device Mapper is the ioctl system call. The user interface is the dmsetup command. LVM logical volumes are activated using the Device Mapper. Each logical volume is translated into a mapped device. Each segment translates into a line in the mapping table that describes the device. The Device Mapper supports a variety of mapping targets, including linear mapping, striped mapping, and error mapping. So, for example, two disks may be concatenated into one logical volume with a pair of linear mappings, one for each disk. When LVM creates a volume, it creates an underlying device-mapper device that can be queried with the dmsetup command. For information about the format of devices in a mapping table, see Section A.1, "Device Table Mappings" . For information about using the dmsetup command to query a device, see Section A.2, "The dmsetup Command" . A.1. Device Table Mappings A mapped device is defined by a table that specifies how to map each range of logical sectors of the device using a supported Device Table mapping. The table for a mapped device is constructed from a list of lines of the form: In the first line of a Device Mapper table, the start parameter must equal 0. The start + length parameters on one line must equal the start on the line. Which mapping parameters are specified in a line of the mapping table depends on which mapping type is specified on the line. Sizes in the Device Mapper are always specified in sectors (512 bytes). When a device is specified as a mapping parameter in the Device Mapper, it can be referenced by the device name in the filesystem (for example, /dev/hda ) or by the major and minor numbers in the format major : minor . The major:minor format is preferred because it avoids pathname lookups. The following shows a sample mapping table for a device. In this table there are four linear targets: The first 2 parameters of each line are the segment starting block and the length of the segment. The keyword is the mapping target, which in all of the cases in this example is linear . The rest of the line consists of the parameters for a linear target. The following subsections describe these mapping formats: linear striped mirror snapshot and snapshot-origin error zero multipath crypt device-mapper RAID thin thin-pool A.1.1. The linear Mapping Target A linear mapping target maps a continuous range of blocks onto another block device. The format of a linear target is as follows: start starting block in virtual device length length of this segment device block device, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor offset starting offset of the mapping on the device The following example shows a linear target with a starting block in the virtual device of 0, a segment length of 1638400, a major:minor number pair of 8:2, and a starting offset for the device of 41146992. The following example shows a linear target with the device parameter specified as the device /dev/hda . A.1.2. The striped Mapping Target The striped mapping target supports striping across physical devices. It takes as arguments the number of stripes and the striping chunk size followed by a list of pairs of device name and sector. The format of a striped target is as follows: There is one set of device and offset parameters for each stripe. start starting block in virtual device length length of this segment #stripes number of stripes for the virtual device chunk_size number of sectors written to each stripe before switching to the ; must be power of 2 at least as big as the kernel page size device block device, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor . offset starting offset of the mapping on the device The following example shows a striped target with three stripes and a chunk size of 128: 0 starting block in virtual device 73728 length of this segment striped 3 128 stripe across three devices with chunk size of 128 blocks 8:9 major:minor numbers of first device 384 starting offset of the mapping on the first device 8:8 major:minor numbers of second device 384 starting offset of the mapping on the second device 8:7 major:minor numbers of third device 9789824 starting offset of the mapping on the third device The following example shows a striped target for 2 stripes with 256 KiB chunks, with the device parameters specified by the device names in the file system rather than by the major and minor numbers. A.1.3. The mirror Mapping Target The mirror mapping target supports the mapping of a mirrored logical device. The format of a mirrored target is as follows: start starting block in virtual device length length of this segment log_type The possible log types and their arguments are as follows: core The mirror is local and the mirror log is kept in core memory. This log type takes 1 - 3 arguments: regionsize [[ no ] sync ] [ block_on_error ] disk The mirror is local and the mirror log is kept on disk. This log type takes 2 - 4 arguments: logdevice regionsize [[ no ] sync ] [ block_on_error ] clustered_core The mirror is clustered and the mirror log is kept in core memory. This log type takes 2 - 4 arguments: regionsize UUID [[ no ] sync ] [ block_on_error ] clustered_disk The mirror is clustered and the mirror log is kept on disk. This log type takes 3 - 5 arguments: logdevice regionsize UUID [[ no ] sync ] [ block_on_error ] LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. The regionsize argument specifies the size of these regions. In a clustered environment, the UUID argument is a unique identifier associated with the mirror log device so that the log state can be maintained throughout the cluster. The optional [no]sync argument can be used to specify the mirror as "in-sync" or "out-of-sync". The block_on_error argument is used to tell the mirror to respond to errors rather than ignoring them. #log_args number of log arguments that will be specified in the mapping logargs the log arguments for the mirror; the number of log arguments provided is specified by the #log-args parameter and the valid log arguments are determined by the log_type parameter. #devs the number of legs in the mirror; a device and an offset is specified for each leg device block device for each mirror leg, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor . A block device and offset is specified for each mirror leg, as indicated by the #devs parameter. offset starting offset of the mapping on the device. A block device and offset is specified for each mirror leg, as indicated by the #devs parameter. The following example shows a mirror mapping target for a clustered mirror with a mirror log kept on disk. 0 starting block in virtual device 52428800 length of this segment mirror clustered_disk mirror target with a log type specifying that mirror is clustered and the mirror log is maintained on disk 4 4 mirror log arguments will follow 253:2 major:minor numbers of log device 1024 region size the mirror log uses to keep track of what is in sync UUID UUID of mirror log device to maintain log information throughout a cluster block_on_error mirror should respond to errors 3 number of legs in mirror 253:3 0 253:4 0 253:5 0 major:minor numbers and offset for devices constituting each leg of mirror A.1.4. The snapshot and snapshot-origin Mapping Targets When you create the first LVM snapshot of a volume, four Device Mapper devices are used: A device with a linear mapping containing the original mapping table of the source volume. A device with a linear mapping used as the copy-on-write (COW) device for the source volume; for each write, the original data is saved in the COW device of each snapshot to keep its visible content unchanged (until the COW device fills up). A device with a snapshot mapping combining #1 and #2, which is the visible snapshot volume. The "original" volume (which uses the device number used by the original source volume), whose table is replaced by a "snapshot-origin" mapping from device #1. A fixed naming scheme is used to create these devices, For example, you might use the following commands to create an LVM volume named base and a snapshot volume named snap based on that volume. This yields four devices, which you can view with the following commands: The format for the snapshot-origin target is as follows: start starting block in virtual device length length of this segment origin base volume of snapshot The snapshot-origin will normally have one or more snapshots based on it. Reads will be mapped directly to the backing device. For each write, the original data will be saved in the COW device of each snapshot to keep its visible content unchanged until the COW device fills up. The format for the snapshot target is as follows: start starting block in virtual device length length of this segment origin base volume of snapshot COW-device Device on which changed chunks of data are stored P|N P (Persistent) or N (Not persistent); indicates whether snapshot will survive after reboot. For transient snapshots (N) less metadata must be saved on disk; they can be kept in memory by the kernel. chunksize Size in sectors of changed chunks of data that will be stored on the COW device The following example shows a snapshot-origin target with an origin device of 254:11. The following example shows a snapshot target with an origin device of 254:11 and a COW device of 254:12. This snapshot device is persistent across reboots and the chunk size for the data stored on the COW device is 16 sectors. A.1.5. The error Mapping Target With an error mapping target, any I/O operation to the mapped sector fails. An error mapping target can be used for testing. To test how a device behaves in failure, you can create a device mapping with a bad sector in the middle of a device, or you can swap out the leg of a mirror and replace the leg with an error target. An error target can be used in place of a failing device, as a way of avoiding timeouts and retries on the actual device. It can serve as an intermediate target while you rearrange LVM metadata during failures. The error mapping target takes no additional parameters besides the start and length parameters. The following example shows an error target. A.1.6. The zero Mapping Target The zero mapping target is a block device equivalent of /dev/zero . A read operation to this mapping returns blocks of zeros. Data written to this mapping is discarded, but the write succeeds. The zero mapping target takes no additional parameters besides the start and length parameters. The following example shows a zero target for a 16Tb Device. A.1.7. The multipath Mapping Target The multipath mapping target supports the mapping of a multipathed device. The format for the multipath target is as follows: There is one set of pathgroupargs parameters for each path group. start starting block in virtual device length length of this segment #features The number of multipath features, followed by those features. If this parameter is zero, then there is no feature parameter and the device mapping parameter is #handlerargs . Currently there is one supported feature that can be set with the features attribute in the multipath.conf file, queue_if_no_path . This indicates that this multipathed device is currently set to queue I/O operations if there is no path available. In the following example, the no_path_retry attribute in the multipath.conf file has been set to queue I/O operations only until all paths have been marked as failed after a set number of attempts have been made to use the paths. In this case, the mapping appears as follows until all the path checkers have failed the specified number of checks. After all the path checkers have failed the specified number of checks, the mapping would appear as follows. #handlerargs The number of hardware handler arguments, followed by those arguments. A hardware handler specifies a module that will be used to perform hardware-specific actions when switching path groups or handling I/O errors. If this is set to 0, then the parameter is #pathgroups . #pathgroups The number of path groups. A path group is the set of paths over which a multipathed device will load balance. There is one set of pathgroupargs parameters for each path group. pathgroup The path group to try. pathgroupsargs Each path group consists of the following arguments: There is one set of path arguments for each path in the path group. pathselector Specifies the algorithm in use to determine what path in this path group to use for the I/O operation. #selectorargs The number of path selector arguments which follow this argument in the multipath mapping. Currently, the value of this argument is always 0. #paths The number of paths in this path group. #pathargs The number of path arguments specified for each path in this group. Currently this number is always 1, the ioreqs argument. device The block device number of the path, referenced by the major and minor numbers in the format major : minor ioreqs The number of I/O requests to route to this path before switching to the path in the current group. Figure A.1, "Multipath Mapping Target" shows the format of a multipath target with two path groups. Figure A.1. Multipath Mapping Target The following example shows a pure failover target definition for the same multipath device. In this target there are four path groups, with only one open path per path group so that the multipathed device will use only one path at a time. The following example shows a full spread (multibus) target definition for the same multipathed device. In this target there is only one path group, which includes all of the paths. In this setup, multipath spreads the load evenly out to all of the paths. For further information about multipathing, see the Using Device Mapper Multipath document. A.1.8. The crypt Mapping Target The crypt target encrypts the data passing through the specified device. It uses the kernel Crypto API. The format for the crypt target is as follows: start starting block in virtual device length length of this segment cipher Cipher consists of cipher[-chainmode]-ivmode[:iv options] . cipher Ciphers available are listed in /proc/crypto (for example, aes ). chainmode Always use cbc . Do not use ebc ; it does not use an initial vector (IV). ivmode[:iv options] IV is an initial vector used to vary the encryption. The IV mode is plain or essiv:hash . An ivmode of -plain uses the sector number (plus IV offset) as the IV. An ivmode of -essiv is an enhancement avoiding a watermark weakness. key Encryption key, supplied in hex IV-offset Initial Vector (IV) offset device block device, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor offset starting offset of the mapping on the device The following is an example of a crypt target. A.1.9. The device-mapper RAID Mapping Target The device-mapper RAID (dm-raid) target provides a bridge from DM to MD. It allows the MD RAID drivers to be accessed using a device-mapper interface. The format of the dm-raid target is as follows start starting block in virtual device length length of this segment raid_type The RAID type can be one of the following raid1 RAID1 mirroring raid4 RAID4 dedicated parity disk raid5_la RAID5 left asymmetric - rotating parity 0 with data continuation raid5_ra RAID5 right asymmetric - rotating parity N with data continuation raid5_ls RAID5 left symmetric - rotating parity 0 with data restart raid5_rs RAID5 right symmetric - rotating parity N with data restart raid6_zr RAID6 zero restart - rotating parity 0 (left to right) with data restart raid6_nr RAID6 N restart - rotating parity N (right to left) with data restart raid6_nc RAID6 N continue - rotating parity N (right to left) with data continuation raid10 Various RAID10-inspired algorithms selected by further optional arguments - RAID 10: Striped mirrors (striping on top of mirrors) - RAID 1E: Integrated adjacent striped mirroring - RAID 1E: Integrated offset striped mirroring - Other similar RAID10 variants #raid_params The number of parameters that follow raid_params Mandatory parameters: chunk_size Chunk size in sectors. This parameter is often known as "stripe size". It is the only mandatory parameter and is placed first. Followed by optional parameters (in any order): [sync|nosync] Force or prevent RAID initialization. rebuild idx Rebuild drive number idx (first drive is 0). daemon_sleep ms Interval between runs of the bitmap daemon that clear bits. A longer interval means less bitmap I/O but resyncing after a failure is likely to take longer. min_recovery_rate KB/sec/disk Throttle RAID initialization max_recovery_rate KB/sec/disk Throttle RAID initialization write_mostly idx Mark drive index idx write-mostly. max_write_behind sectors See the description of --write-behind in the mdadm man page. stripe_cache sectors Stripe cache size (RAID 4/5/6 only) region_size sectors The region_size multiplied by the number of regions is the logical size of the array. The bitmap records the device synchronization state for each region. raid10_copies #copies The number of RAID10 copies. This parameter is used in conjunction with the raid10_format parameter to alter the default layout of a RAID10 configuration. The default value is 2. raid10_format near|far|offset This parameter is used in conjunction with the raid10_copies parameter to alter the default layout of a RAID10 configuration. The default value is near , which specifies a standard mirroring layout. If the raid10_copies and raid10_format are left unspecified, or raid10_copies 2 and/or raid10_format near is specified, then the layouts for 2, 3 and 4 devices are as follows: The 2-device layout is equivalent to 2-way RAID1. The 4-device layout is what a traditional RAID10 would look like. The 3-device layout is what might be called a 'RAID1E - Integrated Adjacent Stripe Mirroring'. If raid10_copies 2 and raid10_format far are specified, then the layouts for 2, 3 and 4 devices are as follows: If raid10_copies 2 and raid10_format offset are specified, then the layouts for 2, 3 and 4 devices are as follows: These layouts closely resemble the layouts fo RAID1E - Integrated Offset Stripe Mirroring' #raid_devs The number of devices composing the array Each device consists of two entries. The first is the device containing the metadata (if any); the second is the one containing the data. If a drive has failed or is missing at creation time, a '-' can be given for both the metadata and data drives for a given position. The following example shows a RAID4 target with a starting block of 0 and a segment length of 1960893648. There are 4 data drives, 1 parity, with no metadata devices specified to hold superblock/bitmap info and a chunk size of 1MiB The following example shows a RAID4 target with a starting block of 0 and a segment length of 1960893648. there are 4 data drives, 1 parity, with metadata devices, a chunk size of 1MiB, force RAID initialization, and a min_recovery rate of 20 kiB/sec/disks. A.1.10. The thin and thin-pool Mapping Targets The format of a thin-pool target is as follows: start starting block in virtual device length length of this segment metadata_dev The metadata device data_dev The data device data_block_size The data block size (in sectors). The data block size gives the smallest unit of disk space that can be allocated at a time expressed in units of 512-byte sectors. Data block size must be between 64KB (128 sectors) and 1GB (2097152 sectors) inclusive and it must be a mutlipole of 128 (64KB). low_water_mark The low water mark, expressed in blocks of size data_block_size . If free space on the data device drops below this level then a device-mapper event will be triggered which a user-space daemon should catch allowing it to extend the pool device. Only one such event will be sent. Resuming a device with a new table itself triggers an event so the user-space daemon can use this to detect a situation where a new table already exceeds the threshold. A low water mark for the metadata device is maintained in the kernel and will trigger a device-mapper event if free space on the metadata device drops below it. #feature_args The number of feature arguments arg The thin pool feature argument are as follows: skip_block_zeroing Skip the zeroing of newly-provisioned blocks. ignore_discard Disable discard support. no_discard_passdown Do not pass discards down to the underlying data device, but just remove the mapping. read_only Do not allow any changes to be made to the pool metadata. error_if_no_space Error IOs, instead of queuing, if no space. The following example shows a thin-pool target with a starting block in the virtual device of 0, a segment length of 1638400. /dev/sdc1 is a small metadata device and /dev/sdc2 is a larger data device. The chunksize is 64k, the low_water_mark is 0, and there are no features. The format of a thin target is as follows: start starting block in virtual device length length of this segment pool_dev The thin-pool device, for example /dev/mapper/my_pool or 253:0 dev_id The internal device identifier of the device to be activated. external_origin_dev An optional block device outside the pool to be treated as a read-only snapshot origin. Reads to unprovisioned areas of the thin target will be mapped to this device. The following example shows a 1 GiB thinLV that uses /dev/mapper/pool as its backing store (thin-pool). The target has a starting block in the virtual device of 0 and a segment length of 2097152. | [
"start length mapping [ mapping_parameters... ]",
"0 35258368 linear 8:48 65920 35258368 35258368 linear 8:32 65920 70516736 17694720 linear 8:16 17694976 88211456 17694720 linear 8:16 256",
"start length linear device offset",
"0 16384000 linear 8:2 41156992",
"0 20971520 linear /dev/hda 384",
"start length striped #stripes chunk_size device1 offset1 ... deviceN offsetN",
"0 73728 striped 3 128 8:9 384 8:8 384 8:7 9789824",
"0 65536 striped 2 512 /dev/hda 0 /dev/hdb 0",
"start length mirror log_type #logargs logarg1 ... logargN #devs device1 offset1 ... deviceN offsetN",
"0 52428800 mirror clustered_disk 4 253:2 1024 UUID block_on_error 3 253:3 0 253:4 0 253:5 0",
"lvcreate -L 1G -n base volumeGroup lvcreate -L 100M --snapshot -n snap volumeGroup/base",
"dmsetup table|grep volumeGroup volumeGroup-base-real: 0 2097152 linear 8:19 384 volumeGroup-snap-cow: 0 204800 linear 8:19 2097536 volumeGroup-snap: 0 2097152 snapshot 254:11 254:12 P 16 volumeGroup-base: 0 2097152 snapshot-origin 254:11 ls -lL /dev/mapper/volumeGroup-* brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap-cow brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base",
"start length snapshot-origin origin",
"start length snapshot origin COW-device P|N chunksize",
"0 2097152 snapshot-origin 254:11",
"0 2097152 snapshot 254:11 254:12 P 16",
"0 65536 error",
"0 65536 zero",
"start length multipath #features [feature1 ... featureN] #handlerargs [handlerarg1 ... handlerargN] #pathgroups pathgroup pathgroupargs1 ... pathgroupargsN",
"0 71014400 multipath 1 queue_if_no_path 0 2 1 round-robin 0 2 1 66:128 1000 65:64 1000 round-robin 0 2 1 8:0 1000 67:192 1000",
"0 71014400 multipath 0 0 2 1 round-robin 0 2 1 66:128 1000 65:64 1000 round-robin 0 2 1 8:0 1000 67:192 1000",
"pathselector #selectorargs #paths #pathargs device1 ioreqs1 ... deviceN ioreqsN",
"0 71014400 multipath 0 0 4 1 round-robin 0 1 1 66:112 1000 round-robin 0 1 1 67:176 1000 round-robin 0 1 1 68:240 1000 round-robin 0 1 1 65:48 1000",
"0 71014400 multipath 0 0 1 1 round-robin 0 4 1 66:112 1000 67:176 1000 68:240 1000 65:48 1000",
"start length crypt cipher key IV-offset device offset",
"0 2097152 crypt aes-plain 0123456789abcdef0123456789abcdef 0 /dev/hda 0",
"start length raid raid_type #raid_params raid_params #raid_devs metadata_dev0 dev0 [.. metadata_devN devN ]",
"2 drives 3 drives 4 drives -------- ---------- -------------- A1 A1 A1 A1 A2 A1 A1 A2 A2 A2 A2 A2 A3 A3 A3 A3 A4 A4 A3 A3 A4 A4 A5 A5 A5 A6 A6 A4 A4 A5 A6 A6 A7 A7 A8 A8 .. .. .. .. .. .. .. .. ..",
"2 drives 3 drives 4 drives -------- ----------- ------------------ A1 A2 A1 A2 A3 A1 A2 A3 A4 A3 A4 A4 A5 A6 A5 A6 A7 A8 A5 A6 A7 A8 A9 A9 A10 A11 A12 .. .. .. .. .. .. .. .. .. A2 A1 A3 A1 A2 A2 A1 A4 A3 A4 A3 A6 A4 A5 A6 A5 A8 A7 A6 A5 A9 A7 A8 A10 A9 A12 A11 .. .. .. .. .. .. .. .. ..",
"2 drives 3 drives 4 drives -------- -------- ------------------ A1 A2 A1 A2 A3 A1 A2 A3 A4 A2 A1 A3 A1 A2 A2 A1 A4 A3 A3 A4 A4 A5 A6 A5 A6 A7 A8 A4 A3 A6 A4 A5 A6 A5 A8 A7 A5 A6 A7 A8 A9 A9 A10 A11 A12 A6 A5 A9 A7 A8 A10 A9 A12 A11 .. .. .. .. .. .. .. .. ..",
"0 1960893648 raid raid4 1 2048 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81",
"0 1960893648 raid raid4 4 2048 sync min_recovery_rate 20 5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82",
"start length thin-pool metadata_dev data_dev data_block_size low_water_mark [ #feature_args [ arg *] ]",
"0 16384000 thin-pool /dev/sdc1 /dev/sdc2 128 0 0",
"start length thin pool_dev dev_id [ external_origin_dev ]",
"0 2097152 thin /dev/mapper/pool 1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/device_mapper |
8.117. mobile-broadband-provider-info | 8.117. mobile-broadband-provider-info 8.117.1. RHBA-2013:0974 - mobile-broadband-provider-info bug fix update Updated mobile-broadband-provider-info packages that fix one bug are now available for Red Hat Enterprise Linux 6. The mobile-broadband-provider-info packages contain listings of mobile broadband (3G) providers, associated network, and plan information. Bug Fix BZ# 844288 Previously, in the serviceproviders.xml file located in the /usr/share/mobile-broadband-provider-info/ directory, "internet.saunalahti" was incorrectly specified as an APN (Access Point Name) value for the Sonera provider. This prevented the Sonera mobile broadband configuration from working. The stanza containing "internet.saunalahti" as an APN value for Sonera has been removed from the XML file, and the Sonera mobile broadband configuration now works as expected. Users of mobile-broadband-provider-info are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/mobile-broadband-provider-info |
Chapter 26. Installing an Identity Management replica using an Ansible playbook | Chapter 26. Installing an Identity Management replica using an Ansible playbook Configuring a system as an IdM replica by using Ansible enrolls it into an IdM domain and enables the system to use IdM services on IdM servers in the domain. The deployment is managed by the ipareplica Ansible role. The role can use the autodiscovery mode for identifying the IdM servers, domain and other settings. However, if you deploy multiple replicas in a tier-like model, with different groups of replicas being deployed at different times, you must define specific servers or replicas for each group. Prerequisites You have installed the ansible-freeipa package on the Ansible control node. You understand the general Ansible and IdM concepts. You have planned the replica topology in your deployment . 26.1. Specifying the base, server and client variables for installing the IdM replica Complete this procedure to configure the inventory file for installing an IdM replica. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. Procedure Open the inventory file for editing. Specify the fully-qualified domain names (FQDN) of the hosts to become IdM replicas. The FQDNs must be valid DNS names: Only numbers, alphabetic characters, and hyphens ( - ) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. Example of a simple inventory hosts file with only the replicas' FQDN defined If the IdM server is already deployed and the SRV records are set properly in the IdM DNS zone, the script automatically discovers all the other required values. Optional: Provide additional information in the inventory file based on how you have designed your topology: Scenario 1 If you want to avoid autodiscovery and have all replicas listed in the [ipareplicas] section use a specific IdM server, set the server in the [ipaservers] section of the inventory file. Example inventory hosts file with the FQDN of the IdM server and replicas defined Scenario 2 Alternatively, if you want to avoid autodiscovery but want to deploy specific replicas with specific servers, set the servers for specific replicas individually in the [ipareplicas] section in the inventory file. Example inventory file with a specific IdM server defined for a specific replica In the example above, replica3.idm.example.com uses the already deployed replica1.idm.example.com as its replication source. Scenario 3 If you are deploying several replicas in one batch and time is a concern to you, multitier replica deployment can be useful for you. Define specific groups of replicas in the inventory file, for example [ipareplicas_tier1] and [ipareplicas_tier2] , and design separate plays for each group in the install-replica.yml playbook. Example inventory file with replica tiers defined The first entry in ipareplica_servers will be used. The second entry will be used as a fallback option. When using multiple tiers for deploying IdM replicas, you must have separate tasks in the playbook to first deploy replicas from tier1 and then replicas from tier2: Example of a playbook file with different plays for different replica groups Optional: Provide additional information regarding firewalld and DNS: Scenario 1 If you want the replica to use a specified firewalld zone, for example an internal one, you can specify it in the inventory file. If you do not set a custom zone, IdM will add its services to the default firewalld zone. The predefined default zone is public . Important The specified firewalld zone must exist and be permanent. Example of a simple inventory hosts file with a custom firewalld zone Scenario 2 If you want the replica to host the IdM DNS service, add the ipareplica_setup_dns=true line to the [ipareplicas:vars] section. Additionally, specify if you want to use per-server DNS forwarders: To configure per-server forwarders, add the ipareplica_forwarders variable and a list of strings to the [ipareplicas:vars] section, for example: ipareplica_forwarders=192.0.2.1,192.0.2.2 To configure no per-server forwarders, add the following line to the [ipareplicas:vars] section: ipareplica_no_forwarders=true . To configure per-server forwarders based on the forwarders listed in the /etc/resolv.conf file of the replica, add the ipareplica_auto_forwarders variable to the [ipareplicas:vars] section. Example inventory file with instructions to set up DNS and per-server forwarders on the replicas Scenario 3 Specify the DNS resolver using the ipaclient_configure_dns_resolve and ipaclient_dns_servers options (if available) to simplify cluster deployments. This is especially useful if your IdM deployment is using integrated DNS: An inventory file snippet specifying a DNS resolver: Note The ipaclient_dns_servers list must contain only IP addresses. Host names are not allowed. Additional resources /usr/share/ansible/roles/ipareplica/README.md 26.2. Specifying the credentials for installing the IdM replica using an Ansible playbook Complete this procedure to configure the authorization for installing the IdM replica. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. Procedure Specify the password of a user authorized to deploy replicas , for example the IdM admin . Use the Ansible Vault to store the password, and reference the Vault file from the playbook file, for example install-replica.yml : Example playbook file using principal from inventory file and password from an Ansible Vault file For details how to use Ansible Vault, see the official Ansible Vault documentation. Less securely, provide the credentials of admin directly in the inventory file. Use the ipaadmin_password option in the [ipareplicas:vars] section of the inventory file. The inventory file and the install-replica.yml playbook file can then look as follows: Example inventory hosts.replica file Example playbook using principal and password from inventory file Alternatively but also less securely, provide the credentials of another user authorized to deploy a replica directly in the inventory file. To specify a different authorized user, use the ipaadmin_principal option for the user name, and the ipaadmin_password option for the password. The inventory file and the install-replica.yml playbook file can then look as follows: Example inventory hosts.replica file Example playbook using principal and password from inventory file Note As of RHEL 9.5, during the installation of an IdM replica, checking if the provided Kerberos principal has the required privilege also extends to checking user ID overrides. As a result, you can deploy a replica using the credentials of an AD administrator that is configured to act as an IdM administrator. Additional resources /usr/share/ansible/roles/ipareplica/README.md 26.3. Deploying an IdM replica using an Ansible playbook Complete this procedure to use an Ansible playbook to deploy an IdM replica. Prerequisites The managed node is a Red Hat Enterprise Linux 9 system with a static IP address and a working package manager. You have configured the inventory file for installing an IdM replica . You have configured the authorization for installing the IdM replica . Procedure Run the Ansible playbook: 26.4. Uninstalling an IdM replica using an Ansible playbook Note In an existing Identity Management (IdM) deployment, replica and server are interchangeable terms. For information on how to uninstall an IdM server, see Uninstalling an IdM server using an Ansible playbook or Using an Ansible playbook to uninstall an IdM server even if this leads to a disconnected topology . Additional resources Introduction to IdM servers and clients | [
"[ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...]",
"[ipaservers] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...]",
"[ipaservers] server.idm.example.com replica1.idm.example.com [ipareplicas] replica2.idm.example.com replica3.idm.example.com ipareplica_servers=replica1.idm.example.com",
"[ipaservers] server.idm.example.com [ipareplicas_tier1] replica1.idm.example.com [ipareplicas_tier2] replica2.idm.example.com \\ ipareplica_servers=replica1.idm.example.com,server.idm.example.com",
"--- - name: Playbook to configure IPA replicas (tier1) hosts: ipareplicas_tier1 become: true roles: - role: ipareplica state: present - name: Playbook to configure IPA replicas (tier2) hosts: ipareplicas_tier2 become: true roles: - role: ipareplica state: present",
"[ipaservers] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...] [ipareplicas:vars] ipareplica_firewalld_zone= custom zone",
"[ipaservers] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...] [ipareplicas:vars] ipareplica_setup_dns=true ipareplica_forwarders=192.0.2.1,192.0.2.2",
"[...] [ipaclient:vars] ipaclient_configure_dns_resolver=true ipaclient_dns_servers=192.168.100.1",
"- name: Playbook to configure IPA replicas hosts: ipareplicas become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipareplica state: present",
"[...] [ipareplicas:vars] ipaadmin_password=Secret123",
"- name: Playbook to configure IPA replicas hosts: ipareplicas become: true roles: - role: ipareplica state: present",
"[...] [ipareplicas:vars] ipaadmin_principal=my_admin ipaadmin_password=my_admin_secret123",
"- name: Playbook to configure IPA replicas hosts: ipareplicas become: true roles: - role: ipareplica state: present",
"ansible-playbook -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-replica.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/installing-an-identity-management-replica-using-an-ansible-playbook_installing-identity-management |
Chapter 6. Uninstalling a cluster on AWS | Chapter 6. Uninstalling a cluster on AWS You can remove a cluster that you deployed to Amazon Web Services (AWS). 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 6.2. Deleting Amazon Web Services resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Amazon Web Services (AWS) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on AWS that uses short-term credentials. Procedure Delete the AWS resources that ccoctl created by running the following command: USD ccoctl aws delete \ --name=<name> \ 1 --region=<aws_region> 2 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <aws_region> is the AWS region in which to delete cloud resources. Example output 2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted Verification To verify that the resources are deleted, query AWS. For more information, refer to AWS documentation. 6.3. Deleting a cluster with a configured AWS Local Zone infrastructure After you install a cluster on Amazon Web Services (AWS) into an existing Virtual Private Cloud (VPC), and you set subnets for each Local Zone location, you can delete the cluster and any AWS resources associated with it. The example in the procedure assumes that you created a VPC and its subnets by using a CloudFormation template. Prerequisites You know the name of the CloudFormation stacks, <local_zone_stack_name> and <vpc_stack_name> , that were used during the creation of the network. You need the name of the stack to delete the cluster. You have access rights to the directory that contains the installation files that were created by the installation program. Your account includes a policy that provides you with permissions to delete the CloudFormation stack. Procedure Change to the directory that contains the stored installation program, and delete the cluster by using the destroy cluster command: USD ./openshift-install destroy cluster --dir <installation_directory> \ 1 --log-level=debug 2 1 For <installation_directory> , specify the directory that stored any files created by the installation program. 2 To view different log details, specify error , info , or warn instead of debug . Delete the CloudFormation stack for the Local Zone subnet: USD aws cloudformation delete-stack --stack-name <local_zone_stack_name> Delete the stack of resources that represent the VPC: USD aws cloudformation delete-stack --stack-name <vpc_stack_name> Verification Check that you removed the stack resources by issuing the following commands in the AWS CLI. The AWS CLI outputs that no template component exists. USD aws cloudformation describe-stacks --stack-name <local_zone_stack_name> USD aws cloudformation describe-stacks --stack-name <vpc_stack_name> Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. Opt into AWS Local Zones AWS Local Zones available locations AWS Local Zones features | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2",
"2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted",
"./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2",
"aws cloudformation delete-stack --stack-name <local_zone_stack_name>",
"aws cloudformation delete-stack --stack-name <vpc_stack_name>",
"aws cloudformation describe-stacks --stack-name <local_zone_stack_name>",
"aws cloudformation describe-stacks --stack-name <vpc_stack_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_aws/uninstalling-cluster-aws |
A.2. Creating Dump Files | A.2. Creating Dump Files You can request a dump of the core of a guest virtual machine to a file so that errors in the virtual machine can be diagnosed, for example by the crash utility . Warning In Red Hat Enterprise Linux 7.5 and later, the Kernel Address Space Randomization (KASLR) feature prevents guest dump files from being readable by crash . To fix this, add the <vmcoreinfo/> element to the <features> section of the XML configuration files of your guests. Note, however, that migrating guests with <vmcoreinfo/> fails if the destination host is using an OS that does not support <vmcoreinfo/> . These include Red Hat Enterprise Linux 7.4 and earlier, as well as Red Hat Enterprise Linux 6.9 and earlier. A.2.1. Creating virsh Dump Files Executing the virsh dump command sends a request to dump the core of a guest virtual machine to a file so errors in the virtual machine can be diagnosed. Running this command may require you to manually ensure proper permissions on file and path specified by the argument corefilepath . The virsh dump command is similar to a core dump(or the crash utility). For further information, see Creating a Dump File of a Guest Virtual Machine's Core . A.2.2. Saving a Core Dump Using a Python Script The dump-guest-memory.py python script implements a GNU Debugger (GDB) extension that extracts and saves a guest virtual machine's memory from the core dump after the qemu-kvm process crashes on a host. If the host-side QEMU process crash is related to guest actions, investigating the guest state at the time of the QEMU process crash could be useful. The python script implements a GDB extension. This is a new command for the GDB. After opening the core dump file of the original (crashed) QEMU process with GDB, the python script can be loaded into GDB. The new command can then be executed from the GDB prompt. This extracts a guest memory dump from the QEMU core dumpto a new local file. To use the dump-guest-memory.py python script: Install the qemu-kvm-debuginfo package on the system. Launch GDB, opening the core dump file saved for the crashed /usr/libexec/qemu-kvm binary. The debug symbols load automatically. Load the new command in GDB: Note After loading the python script, the built-in GDB help command can provide detailed information about the dump-guest-memory extension. Run the command in GDB. For example: Open /home/user/extracted-vmcore with the crash utility for guest kernel analysis. For more information about extracting guest virtual machine cores from QEMU core files for use with the crash utility, see How to extract ELF cores from 'gcore' generated qemu core files for use with the 'crash' utility . | [
"source /usr/share/qemu-kvm/dump-guest-memory.py",
"dump-guest-memory /home/user/extracted-vmcore X86_64"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-creating_dump_files |
Chapter 19. Controlling Access to Services | Chapter 19. Controlling Access to Services Maintaining security on your system is extremely important, and one approach for this task is to manage access to system services carefully. Your system may need to provide open access to particular services (for example, httpd if you are running a Web server). However, if you do not need to provide a service, you should turn it off to minimize your exposure to possible bug exploits. There are several different methods for managing access to system services. Decide which method of management to use based on the service, your system's configuration, and your level of Linux expertise. The easiest way to deny access to a service is to turn it off. Both the services managed by xinetd and the services in the /etc/rc.d/init.d hierarchy (also known as SysV services) can be configured to start or stop using three different applications: Services Configuration Tool - a graphical application that displays a description of each service, displays whether each service is started at boot time (for runlevels 3, 4, and 5), and allows services to be started, stopped, and restarted. ntsysv - a text-based application that allows you to configure which services are started at boot time for each runlevel. Non- xinetd services can not be started, stopped, or restarted using this program. chkconfig - a command line utility that allows you to turn services on and off for the different runlevels. Non- xinetd services can not be started, stopped, or restarted using this utility. You may find that these tools are easier to use than the alternatives - editing the numerous symbolic links located in the directories below /etc/rc.d by hand or editing the xinetd configuration files in /etc/xinetd.d . Another way to manage access to system services is by using iptables to configure an IP firewall. If you are a new Linux user, please realize that iptables may not be the best solution for you. Setting up iptables can be complicated and is best tackled by experienced Linux system administrators. On the other hand, the benefit of using iptables is flexibility. For example, if you need a customized solution which provides certain hosts access to certain services, iptables can provide it for you. Refer to the Reference Guide and the Security Guide for more information about iptables . Alternatively, if you are looking for a utility to set general access rules for your home machine, and/or if you are new to Linux, try the Security Level Configuration Tool ( system-config-securitylevel ), which allows you to select the security level for your system, similar to the Firewall Configuration screen in the installation program. If you need more specific firewall rules, refer to the iptables chapter in the Reference Guide . 19.1. Runlevels Before you can configure access to services, you must understand Linux runlevels. A runlevel is a state, or mode , that is defined by the services listed in the directory /etc/rc.d/rc <x> .d , where <x> is the number of the runlevel. The following runlevels exist: 0 - Halt 1 - Single-user mode 2 - Not used (user-definable) 3 - Full multi-user mode 4 - Not used (user-definable) 5 - Full multi-user mode (with an X-based login screen) 6 - Reboot If you use a text login screen, you are operating in runlevel 3. If you use a graphical login screen, you are operating in runlevel 5. The default runlevel can be changed by modifying the /etc/inittab file, which contains a line near the top of the file similar to the following: Change the number in this line to the desired runlevel. The change does not take effect until you reboot the system. To change the runlevel immediately, use the command telinit followed by the runlevel number. You must be root to use this command. The telinit command does not change the /etc/inittab file; it only changes the runlevel currently running. When the system is rebooted, it continues to boot the runlevel as specified in /etc/inittab . | [
"id:5:initdefault:"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/controlling_access_to_services |
Chapter 6. Managing permissions | Chapter 6. Managing permissions A permission associates the object being protected and the policies that must be evaluated to decide whether access should be granted. After creating the resources you want to protect and the policies you want to use to protect these resources, you can start managing permissions. To manage permissions, click the Permissions tab when editing a resource server. Permissions Permissions can be created to protect two main types of objects: Resources Scopes To create a permission, select the permission type you want to create from the item list in the upper right corner of the permission listing. The following sections describe these two types of objects in more detail. 6.1. Creating resource-based permission A resource-based permission defines a set of one or more resources to protect using a set of one or more authorization policies. To create a new resource-based permission, select Create resource-based permission from the Create permission dropdown. Add Resource Permission 6.1.1. Configuration Name A human-readable and unique string describing the permission. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this permission. Apply To Resource Type Specifies if the permission is applied to all resources with a given type. When selecting this field, you are prompted to enter the resource type to protect. Resource Type Defines the resource type to protect. When defined, this permission is evaluated for all resources matching that type. Resources Defines a set of one or more resources to protect. Policy Defines a set of one or more policies to associate with a permission. To associate a policy you can either select an existing policy or create a new one by selecting the type of the policy you want to create. Decision Strategy The Decision Strategy for this permission. 6.1.2. Typed resource permission Resource permissions can also be used to define policies that are to be applied to all resources with a given type . This form of resource-based permission can be useful when you have resources sharing common access requirements and constraints. Frequently, resources within an application can be categorized (or typed) based on the data they encapsulate or the functionality they provide. For example, a financial application can manage different banking accounts where each one belongs to a specific customer. Although they are different banking accounts, they share common security requirements and constraints that are globally defined by the banking organization. With typed resource permissions, you can define common policies to apply to all banking accounts, such as: Only the owner can manage his account Only allow access from the owner's country and/or region Enforce a specific authentication method To create a typed resource permission, click Apply to Resource Type when creating a new resource-based permission. With Apply to Resource Type set to On , you can specify the type that you want to protect as well as the policies that are to be applied to govern access to all resources with type you have specified. Example of a typed resource permission 6.2. Creating scope-based permissions A scope-based permission defines a set of one or more scopes to protect using a set of one or more authorization policies. Unlike resource-based permissions, you can use this permission type to create permissions not only for a resource, but also for the scopes associated with it, providing more granularity when defining the permissions that govern your resources and the actions that can be performed on them. To create a new scope-based permission, select Create scope-based permission from the Create permission dropdown. Add Scope Permission 6.2.1. Configuration Name A human-readable and unique string describing the permission. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this permission. Resource Restricts the scopes to those associated with the selected resource. If none is selected, all scopes are available. Scopes Defines a set of one or more scopes to protect. Policy Defines a set of one or more policies to associate with a permission. To associate a policy you can either select an existing policy or create a new one by selecting the type of the policy you want to create. Decision Strategy The Decision Strategy for this permission. 6.3. Policy decision strategies When associating policies with a permission, you can also define a decision strategy to specify how to evaluate the outcome of the associated policies to determine access. Unanimous The default strategy if none is provided. In this case, all policies must evaluate to a positive decision for the final decision to be also positive. Affirmative In this case, at least one policy must evaluate to a positive decision for the final decision to be also positive. Consensus In this case, the number of positive decisions must be greater than the number of negative decisions. If the number of positive and negative decisions is equal, the final decision will be negative. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/authorization_services_guide/permission_overview |
Administering Red Hat OpenShift API Management | Administering Red Hat OpenShift API Management Red Hat OpenShift API Management 1 Administering Red Hat OpenShift API Management. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/administering_red_hat_openshift_api_management/index |
Chapter 5. Pipelines CLI (tkn) | Chapter 5. Pipelines CLI (tkn) 5.1. Installing tkn Use the CLI tool to manage Red Hat OpenShift Pipelines from a terminal. The following section describes how to install the CLI tool on different platforms. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . :FeatureName: Running Red Hat OpenShift Pipelines on ARM hardware Important {FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Both the archives and the RPMs contain the following executables: tkn tkn-pac opc Important Running Red Hat OpenShift Pipelines with the opc CLI tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1.1. Installing the Red Hat OpenShift Pipelines CLI on Linux For Linux distributions, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. Linux (x86_64, amd64) Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) Linux on IBM Power(R) (ppc64le) Linux on ARM (aarch64, arm64) Unpack the archive: USD tar xvzf <file> Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 5.1.2. Installing the Red Hat OpenShift Pipelines CLI on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-x86_64-rpms" Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-s390x-rpms" Linux on IBM Power(R) (ppc64le) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-ppc64le-rpms" Linux on ARM (aarch64, arm64) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-aarch64-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 5.1.3. Installing the Red Hat OpenShift Pipelines CLI on Windows For Windows, you can download the CLI as a zip archive. Procedure Download the CLI tool . Extract the archive with a ZIP program. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: C:\> path 5.1.4. Installing the Red Hat OpenShift Pipelines CLI on macOS For macOS, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. macOS macOS on ARM Unpack and extract the archive. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 5.2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 5.2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. Save the Bash completion code to a file: USD tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 5.3. OpenShift Pipelines tkn reference This section lists the basic tkn CLI commands. 5.3.1. Basic syntax tkn [command or options] [arguments... ] 5.3.2. Global options --help, -h 5.3.3. Utility commands 5.3.3.1. tkn Parent command for tkn CLI. Example: Display all options USD tkn 5.3.3.2. completion [shell] Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash and zsh . Example: Completion code for bash shell USD tkn completion bash 5.3.3.3. version Print version information of the tkn CLI. Example: Check the tkn version USD tkn version 5.3.4. Pipelines management commands 5.3.4.1. pipeline Manage pipelines. Example: Display help USD tkn pipeline --help 5.3.4.2. pipeline delete Delete a pipeline. Example: Delete the mypipeline pipeline from a namespace USD tkn pipeline delete mypipeline -n myspace 5.3.4.3. pipeline describe Describe a pipeline. Example: Describe the mypipeline pipeline USD tkn pipeline describe mypipeline 5.3.4.4. pipeline list Display a list of pipelines. Example: Display a list of pipelines USD tkn pipeline list 5.3.4.5. pipeline logs Display the logs for a specific pipeline. Example: Stream the live logs for the mypipeline pipeline USD tkn pipeline logs -f mypipeline 5.3.4.6. pipeline start Start a pipeline. Example: Start the mypipeline pipeline USD tkn pipeline start mypipeline 5.3.5. Pipeline run commands 5.3.5.1. pipelinerun Manage pipeline runs. Example: Display help USD tkn pipelinerun -h 5.3.5.2. pipelinerun cancel Cancel a pipeline run. Example: Cancel the mypipelinerun pipeline run from a namespace USD tkn pipelinerun cancel mypipelinerun -n myspace 5.3.5.3. pipelinerun delete Delete a pipeline run. Example: Delete pipeline runs from a namespace USD tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace Example: Delete all pipeline runs from a namespace, except the five most recently executed pipeline runs USD tkn pipelinerun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed pipeline runs you want to retain. Example: Delete all pipelines USD tkn pipelinerun delete --all Note Starting with Red Hat OpenShift Pipelines 1.6, the tkn pipelinerun delete --all command does not delete any resources that are in the running state. 5.3.5.4. pipelinerun describe Describe a pipeline run. Example: Describe the mypipelinerun pipeline run in a namespace USD tkn pipelinerun describe mypipelinerun -n myspace 5.3.5.5. pipelinerun list List pipeline runs. Example: Display a list of pipeline runs in a namespace USD tkn pipelinerun list -n myspace 5.3.5.6. pipelinerun logs Display the logs of a pipeline run. Example: Display the logs of the mypipelinerun pipeline run with all tasks and steps in a namespace USD tkn pipelinerun logs mypipelinerun -a -n myspace 5.3.6. Task management commands 5.3.6.1. task Manage tasks. Example: Display help USD tkn task -h 5.3.6.2. task delete Delete a task. Example: Delete mytask1 and mytask2 tasks from a namespace USD tkn task delete mytask1 mytask2 -n myspace 5.3.6.3. task describe Describe a task. Example: Describe the mytask task in a namespace USD tkn task describe mytask -n myspace 5.3.6.4. task list List tasks. Example: List all the tasks in a namespace USD tkn task list -n myspace 5.3.6.5. task logs Display task logs. Example: Display logs for the mytaskrun task run of the mytask task USD tkn task logs mytask mytaskrun -n myspace 5.3.6.6. task start Start a task. Example: Start the mytask task in a namespace USD tkn task start mytask -s <ServiceAccountName> -n myspace 5.3.7. Task run commands 5.3.7.1. taskrun Manage task runs. Example: Display help USD tkn taskrun -h 5.3.7.2. taskrun cancel Cancel a task run. Example: Cancel the mytaskrun task run from a namespace USD tkn taskrun cancel mytaskrun -n myspace 5.3.7.3. taskrun delete Delete a TaskRun. Example: Delete the mytaskrun1 and mytaskrun2 task runs from a namespace USD tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace Example: Delete all but the five most recently executed task runs from a namespace USD tkn taskrun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed task runs you want to retain. 5.3.7.4. taskrun describe Describe a task run. Example: Describe the mytaskrun task run in a namespace USD tkn taskrun describe mytaskrun -n myspace 5.3.7.5. taskrun list List task runs. Example: List all the task runs in a namespace USD tkn taskrun list -n myspace 5.3.7.6. taskrun logs Display task run logs. Example: Display live logs for the mytaskrun task run in a namespace USD tkn taskrun logs -f mytaskrun -n myspace 5.3.8. Condition management commands 5.3.8.1. condition Manage Conditions. Example: Display help USD tkn condition --help 5.3.8.2. condition delete Delete a Condition. Example: Delete the mycondition1 Condition from a namespace USD tkn condition delete mycondition1 -n myspace 5.3.8.3. condition describe Describe a Condition. Example: Describe the mycondition1 Condition in a namespace USD tkn condition describe mycondition1 -n myspace 5.3.8.4. condition list List Conditions. Example: List Conditions in a namespace USD tkn condition list -n myspace 5.3.9. Pipeline Resource management commands 5.3.9.1. resource Manage Pipeline Resources. Example: Display help USD tkn resource -h 5.3.9.2. resource create Create a Pipeline Resource. Example: Create a Pipeline Resource in a namespace USD tkn resource create -n myspace This is an interactive command that asks for input on the name of the Resource, type of the Resource, and the values based on the type of the Resource. 5.3.9.3. resource delete Delete a Pipeline Resource. Example: Delete the myresource Pipeline Resource from a namespace USD tkn resource delete myresource -n myspace 5.3.9.4. resource describe Describe a Pipeline Resource. Example: Describe the myresource Pipeline Resource USD tkn resource describe myresource -n myspace 5.3.9.5. resource list List Pipeline Resources. Example: List all Pipeline Resources in a namespace USD tkn resource list -n myspace 5.3.10. ClusterTask management commands Important In Red Hat OpenShift Pipelines 1.10, ClusterTask functionality of the tkn command line utility is deprecated and is planned to be removed in a future release. 5.3.10.1. clustertask Manage ClusterTasks. Example: Display help USD tkn clustertask --help 5.3.10.2. clustertask delete Delete a ClusterTask resource in a cluster. Example: Delete mytask1 and mytask2 ClusterTasks USD tkn clustertask delete mytask1 mytask2 5.3.10.3. clustertask describe Describe a ClusterTask. Example: Describe the mytask ClusterTask USD tkn clustertask describe mytask1 5.3.10.4. clustertask list List ClusterTasks. Example: List ClusterTasks USD tkn clustertask list 5.3.10.5. clustertask start Start ClusterTasks. Example: Start the mytask ClusterTask USD tkn clustertask start mytask 5.3.11. Trigger management commands 5.3.11.1. eventlistener Manage EventListeners. Example: Display help USD tkn eventlistener -h 5.3.11.2. eventlistener delete Delete an EventListener. Example: Delete mylistener1 and mylistener2 EventListeners in a namespace USD tkn eventlistener delete mylistener1 mylistener2 -n myspace 5.3.11.3. eventlistener describe Describe an EventListener. Example: Describe the mylistener EventListener in a namespace USD tkn eventlistener describe mylistener -n myspace 5.3.11.4. eventlistener list List EventListeners. Example: List all the EventListeners in a namespace USD tkn eventlistener list -n myspace 5.3.11.5. eventlistener logs Display logs of an EventListener. Example: Display the logs of the mylistener EventListener in a namespace USD tkn eventlistener logs mylistener -n myspace 5.3.11.6. triggerbinding Manage TriggerBindings. Example: Display TriggerBindings help USD tkn triggerbinding -h 5.3.11.7. triggerbinding delete Delete a TriggerBinding. Example: Delete mybinding1 and mybinding2 TriggerBindings in a namespace USD tkn triggerbinding delete mybinding1 mybinding2 -n myspace 5.3.11.8. triggerbinding describe Describe a TriggerBinding. Example: Describe the mybinding TriggerBinding in a namespace USD tkn triggerbinding describe mybinding -n myspace 5.3.11.9. triggerbinding list List TriggerBindings. Example: List all the TriggerBindings in a namespace USD tkn triggerbinding list -n myspace 5.3.11.10. triggertemplate Manage TriggerTemplates. Example: Display TriggerTemplate help USD tkn triggertemplate -h 5.3.11.11. triggertemplate delete Delete a TriggerTemplate. Example: Delete mytemplate1 and mytemplate2 TriggerTemplates in a namespace USD tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace` 5.3.11.12. triggertemplate describe Describe a TriggerTemplate. Example: Describe the mytemplate TriggerTemplate in a namespace USD tkn triggertemplate describe mytemplate -n `myspace` 5.3.11.13. triggertemplate list List TriggerTemplates. Example: List all the TriggerTemplates in a namespace USD tkn triggertemplate list -n myspace 5.3.11.14. clustertriggerbinding Manage ClusterTriggerBindings. Example: Display ClusterTriggerBindings help USD tkn clustertriggerbinding -h 5.3.11.15. clustertriggerbinding delete Delete a ClusterTriggerBinding. Example: Delete myclusterbinding1 and myclusterbinding2 ClusterTriggerBindings USD tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2 5.3.11.16. clustertriggerbinding describe Describe a ClusterTriggerBinding. Example: Describe the myclusterbinding ClusterTriggerBinding USD tkn clustertriggerbinding describe myclusterbinding 5.3.11.17. clustertriggerbinding list List ClusterTriggerBindings. Example: List all ClusterTriggerBindings USD tkn clustertriggerbinding list 5.3.12. Hub interaction commands Interact with Tekton Hub for resources such as tasks and pipelines. 5.3.12.1. hub Interact with hub. Example: Display help USD tkn hub -h Example: Interact with a hub API server USD tkn hub --api-server https://api.hub.tekton.dev Note For each example, to get the corresponding sub-commands and flags, run tkn hub <command> --help . 5.3.12.2. hub downgrade Downgrade an installed resource. Example: Downgrade the mytask task in the mynamespace namespace to it's older version USD tkn hub downgrade task mytask --to version -n mynamespace 5.3.12.3. hub get Get a resource manifest by its name, kind, catalog, and version. Example: Get the manifest for a specific version of the myresource pipeline or task from the tekton catalog USD tkn hub get [pipeline | task] myresource --from tekton --version version 5.3.12.4. hub info Display information about a resource by its name, kind, catalog, and version. Example: Display information about a specific version of the mytask task from the tekton catalog USD tkn hub info task mytask --from tekton --version version 5.3.12.5. hub install Install a resource from a catalog by its kind, name, and version. Example: Install a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub install task mytask --from tekton --version version -n mynamespace 5.3.12.6. hub reinstall Reinstall a resource by its kind and name. Example: Reinstall a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub reinstall task mytask --from tekton --version version -n mynamespace 5.3.12.7. hub search Search a resource by a combination of name, kind, and tags. Example: Search a resource with a tag cli USD tkn hub search --tags cli 5.3.12.8. hub upgrade Upgrade an installed resource. Example: Upgrade the installed mytask task in the mynamespace namespace to a new version USD tkn hub upgrade task mytask --to version -n mynamespace | [
"tar xvzf <file>",
"echo USDPATH",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*pipelines*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-ppc64le-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-aarch64-rpms\"",
"yum install openshift-pipelines-client",
"tkn version",
"C:\\> path",
"echo USDPATH",
"tkn completion bash > tkn_bash_completion",
"sudo cp tkn_bash_completion /etc/bash_completion.d/",
"tkn",
"tkn completion bash",
"tkn version",
"tkn pipeline --help",
"tkn pipeline delete mypipeline -n myspace",
"tkn pipeline describe mypipeline",
"tkn pipeline list",
"tkn pipeline logs -f mypipeline",
"tkn pipeline start mypipeline",
"tkn pipelinerun -h",
"tkn pipelinerun cancel mypipelinerun -n myspace",
"tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace",
"tkn pipelinerun delete -n myspace --keep 5 1",
"tkn pipelinerun delete --all",
"tkn pipelinerun describe mypipelinerun -n myspace",
"tkn pipelinerun list -n myspace",
"tkn pipelinerun logs mypipelinerun -a -n myspace",
"tkn task -h",
"tkn task delete mytask1 mytask2 -n myspace",
"tkn task describe mytask -n myspace",
"tkn task list -n myspace",
"tkn task logs mytask mytaskrun -n myspace",
"tkn task start mytask -s <ServiceAccountName> -n myspace",
"tkn taskrun -h",
"tkn taskrun cancel mytaskrun -n myspace",
"tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace",
"tkn taskrun delete -n myspace --keep 5 1",
"tkn taskrun describe mytaskrun -n myspace",
"tkn taskrun list -n myspace",
"tkn taskrun logs -f mytaskrun -n myspace",
"tkn condition --help",
"tkn condition delete mycondition1 -n myspace",
"tkn condition describe mycondition1 -n myspace",
"tkn condition list -n myspace",
"tkn resource -h",
"tkn resource create -n myspace",
"tkn resource delete myresource -n myspace",
"tkn resource describe myresource -n myspace",
"tkn resource list -n myspace",
"tkn clustertask --help",
"tkn clustertask delete mytask1 mytask2",
"tkn clustertask describe mytask1",
"tkn clustertask list",
"tkn clustertask start mytask",
"tkn eventlistener -h",
"tkn eventlistener delete mylistener1 mylistener2 -n myspace",
"tkn eventlistener describe mylistener -n myspace",
"tkn eventlistener list -n myspace",
"tkn eventlistener logs mylistener -n myspace",
"tkn triggerbinding -h",
"tkn triggerbinding delete mybinding1 mybinding2 -n myspace",
"tkn triggerbinding describe mybinding -n myspace",
"tkn triggerbinding list -n myspace",
"tkn triggertemplate -h",
"tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`",
"tkn triggertemplate describe mytemplate -n `myspace`",
"tkn triggertemplate list -n myspace",
"tkn clustertriggerbinding -h",
"tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2",
"tkn clustertriggerbinding describe myclusterbinding",
"tkn clustertriggerbinding list",
"tkn hub -h",
"tkn hub --api-server https://api.hub.tekton.dev",
"tkn hub downgrade task mytask --to version -n mynamespace",
"tkn hub get [pipeline | task] myresource --from tekton --version version",
"tkn hub info task mytask --from tekton --version version",
"tkn hub install task mytask --from tekton --version version -n mynamespace",
"tkn hub reinstall task mytask --from tekton --version version -n mynamespace",
"tkn hub search --tags cli",
"tkn hub upgrade task mytask --to version -n mynamespace"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cli_tools/pipelines-cli-tkn |
Chapter 19. Monitoring your cluster using JMX | Chapter 19. Monitoring your cluster using JMX Collecting metrics is critical for understanding the health and performance of your Kafka deployment. By monitoring metrics, you can actively identify issues before they become critical and make informed decisions about resource allocation and capacity planning. Without metrics, you may be left with limited visibility into the behavior of your Kafka deployment, which can make troubleshooting more difficult and time-consuming. Setting up metrics can save you time and resources in the long run, and help ensure the reliability of your Kafka deployment. Kafka components use Java Management Extensions (JMX) to share management information through metrics. These metrics are crucial for monitoring a Kafka cluster's performance and overall health. Like many other Java applications, Kafka employs Managed Beans (MBeans) to supply metric data to monitoring tools and dashboards. JMX operates at the JVM level, allowing external tools to connect and retrieve management information from Kafka components. To connect to the JVM, these tools typically need to run on the same machine and with the same user privileges by default. 19.1. Enabling the JMX agent Enable JMX monitoring of Kafka components using JVM system properties. Use the KAFKA_JMX_OPTS environment variable to set the JMX system properties required for enabling JMX monitoring. The scripts that run the Kafka component use these properties. Procedure Set the KAFKA_JMX_OPTS environment variable with the JMX properties for enabling JMX monitoring. export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=<port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false Replace <port> with the name of the port on which you want the Kafka component to listen for JMX connections. Add org.apache.kafka.common.metrics.JmxReporter to metric.reporters in the server.properties file. metric.reporters=org.apache.kafka.common.metrics.JmxReporter Start the Kafka component using the appropriate script, such as bin/kafka-server-start.sh for a broker or bin/connect-distributed.sh for Kafka Connect. Important It is recommended that you configure authentication and SSL to secure a remote JMX connection. For more information about the system properties needed to do this, see the Oracle documentation . 19.2. Disabling the JMX agent Disable JMX monitoring for Kafka components by updating the KAFKA_JMX_OPTS environment variable. Procedure Set the KAFKA_JMX_OPTS environment variable to disable JMX monitoring. export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false Note Other JMX properties, like port, authentication, and SSL properties do not need to be specified when disabling JMX monitoring. Set auto.include.jmx.reporter to false in the Kafka server.properties file. auto.include.jmx.reporter=false Note The auto.include.jmx.reporter property is deprecated. From Kafka 4, the JMXReporter is only enabled if org.apache.kafka.common.metrics.JmxReporter is added to the metric.reporters configuration in the properties file. Start the Kafka component using the appropriate script, such as bin/kafka-server-start.sh for a broker or bin/connect-distributed.sh for Kafka Connect. 19.3. Metrics naming conventions When working with Kafka JMX metrics, it's important to understand the naming conventions used to identify and retrieve specific metrics. Kafka JMX metrics use the following format: Metrics format <metric_group>:type=<type_name>,name=<metric_name><other_attribute>=<value> <metric_group> is the name of the metric group <type_name> is the name of the type of metric <metric_name> is the name of the specific metric <other_attribute> represents zero or more additional attributes For example, the BytesInPerSec metric is a BrokerTopicMetrics type in the kafka.server group: kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec In some cases, metrics may include the ID of an entity. For instance, when monitoring a specific client, the metric format includes the client ID: Metrics for a specific client kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id> Similarly, a metric can be further narrowed down to a specific client and topic: Metrics for a specific client and topic kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id>,topic=<topic_id> Understanding these naming conventions will allow you to accurately specify the metrics you want to monitor and analyze. Note To view the full list of available JMX metrics for a Strimzi installation, you can use a graphical tool like JConsole. JConsole is a Java Monitoring and Management Console that allows you to monitor and manage Java applications, including Kafka. By connecting to the JVM running the Kafka component using its process ID, the tool's user interface allows you to view the list of metrics. 19.4. Analyzing Kafka JMX metrics for troubleshooting JMX provides a way to gather metrics about Kafka brokers for monitoring and managing their performance and resource usage. By analyzing these metrics, common broker issues such as high CPU usage, memory leaks, thread contention, and slow response times can be diagnosed and resolved. Certain metrics can pinpoint the root cause of these issues. JMX metrics also provide insights into the overall health and performance of a Kafka cluster. They help monitor the system's throughput, latency, and availability, diagnose issues, and optimize performance. This section explores the use of JMX metrics to help identify common issues and provides insights into the performance of a Kafka cluster. Collecting and graphing these metrics using tools like Prometheus and Grafana allows you to visualize the information returned. This can be particularly helpful in detecting issues or optimizing performance. Graphing metrics over time can also help with identifying trends and forecasting resource consumption. 19.4.1. Checking for under-replicated partitions A balanced Kafka cluster is important for optimal performance. In a balanced cluster, partitions and leaders are evenly distributed across all brokers, and I/O metrics reflect this. As well as using metrics, you can use the kafka-topics.sh tool to get a list of under-replicated partitions and identify the problematic brokers. If the number of under-replicated partitions is fluctuating or many brokers show high request latency, this typically indicates a performance issue in the cluster that requires investigation. On the other hand, a steady (unchanging) number of under-replicated partitions reported by many of the brokers in a cluster normally indicates that one of the brokers in the cluster is offline. Use the describe --under-replicated-partitions option from the kafka-topics.sh tool to show information about partitions that are currently under-replicated in the cluster. These are the partitions that have fewer replicas than the configured replication factor. If the output is blank, the Kafka cluster has no under-replicated partitions. Otherwise, the output shows replicas that are not in sync or available. In the following example, only 2 of the 3 replicas are in sync for each partition, with a replica missing from the ISR (in-sync replica). Returning information on under-replicated partitions from the command line bin/kafka-topics.sh --bootstrap-server :9092 --describe --under-replicated-partitions Topic: topic-1 Partition: 0 Leader: 4 Replicas: 4,2,3 Isr: 4,3 Topic: topic-1 Partition: 1 Leader: 3 Replicas: 2,3,4 Isr: 3,4 Topic: topic-1 Partition: 2 Leader: 3 Replicas: 3,4,2 Isr: 3,4 Here are some metrics to check for I/O and under-replicated partitions: Metrics to check for under-replicated partitions kafka.server:type=ReplicaManager,name=PartitionCount 1 kafka.server:type=ReplicaManager,name=LeaderCount 2 kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec 3 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec 4 kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions 5 kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount 6 1 Total number of partitions across all topics in the cluster. 2 Total number of leaders across all topics in the cluster. 3 Rate of incoming bytes per second for each broker. 4 Rate of outgoing bytes per second for each broker. 5 Number of under-replicated partitions across all topics in the cluster. 6 Number of partitions below the minimum ISR. If topic configuration is set for high availability, with a replication factor of at least 3 for topics and a minimum number of in-sync replicas being 1 less than the replication factor, under-replicated partitions can still be usable. Conversely, partitions below the minimum ISR have reduced availability. You can monitor these using the kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount metric and the under-min-isr-partitions option from the kafka-topics.sh tool. Tip Use Cruise Control to automate the task of monitoring and rebalancing a Kafka cluster to ensure that the partition load is evenly distributed. For more information, see Chapter 13, Using Cruise Control for cluster rebalancing . 19.4.2. Identifying performance problems in a Kafka cluster Spikes in cluster metrics may indicate a broker issue, which is often related to slow or failing storage devices or compute restraints from other processes. If there is no issue at the operating system or hardware level, an imbalance in the load of the Kafka cluster is likely, with some partitions receiving disproportionate traffic compared to others in the same Kafka topic. To anticipate performance problems in a Kafka cluster, it's useful to monitor the RequestHandlerAvgIdlePercent metric. RequestHandlerAvgIdlePercent provides a good overall indicator of how the cluster is behaving. The value of this metric is between 0 and 1. A value below 0.7 indicates that threads are busy 30% of the time and performance is starting to degrade. If the value drops below 50%, problems are likely to occur, especially if the cluster needs to scale or rebalance. At 30%, a cluster is barely usable. Another useful metric is kafka.network:type=Processor,name=IdlePercent , which you can use to monitor the extent (as a percentage) to which network processors in a Kafka cluster are idle. The metric helps identify whether the processors are over or underutilized. To ensure optimal performance, set the num.io.threads property equal to the number of processors in the system, including hyper-threaded processors. If the cluster is balanced, but a single client has changed its request pattern and is causing issues, reduce the load on the cluster or increase the number of brokers. It's important to note that a single disk failure on a single broker can severely impact the performance of an entire cluster. Since producer clients connect to all brokers that lead partitions for a topic, and those partitions are evenly spread over the entire cluster, a poorly performing broker will slow down produce requests and cause back pressure in the producers, slowing down requests to all brokers. A RAID (Redundant Array of Inexpensive Disks) storage configuration that combines multiple physical disk drives into a single logical unit can help prevent this issue. Here are some metrics to check the performance of a Kafka cluster: Metrics to check the performance of a Kafka cluster kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent 1 # attributes: OneMinuteRate, FifteenMinuteRate kafka.server:type=socket-server-metrics,listener=([-.\w]+),networkProcessor=([\d]+) 2 # attributes: connection-creation-rate kafka.network:type=RequestChannel,name=RequestQueueSize 3 kafka.network:type=RequestChannel,name=ResponseQueueSize 4 kafka.network:type=Processor,name=IdlePercent,networkProcessor=([-.\w]+) 5 kafka.server:type=KafkaServer,name=TotalDiskReadBytes 6 kafka.server:type=KafkaServer,name=TotalDiskWriteBytes 7 1 Average idle percentage of the request handler threads in the Kafka broker's thread pool. The OneMinuteRate and FifteenMinuteRate attributes show the request rate of the last one minute and fifteen minutes, respectively. 2 Rate at which new connections are being created on a specific network processor of a specific listener in the Kafka broker. The listener attribute refers to the name of the listener, and the networkProcessor attribute refers to the ID of the network processor. The connection-creation-rate attribute shows the rate of connection creation in connections per second. 3 Current size of the request queue. 4 Current sizes of the response queue. 5 Percentage of time the specified network processor is idle. The networkProcessor specifies the ID of the network processor to monitor. 6 Total number of bytes read from disk by a Kafka server. 7 Total number of bytes written to disk by a Kafka server. 19.4.3. Identifying performance problems with a Kafka controller The Kafka controller is responsible for managing the overall state of the cluster, such as broker registration, partition reassignment, and topic management. Problems with the controller in the Kafka cluster are difficult to diagnose and often fall into the category of bugs in Kafka itself. Controller issues might manifest as broker metadata being out of sync, offline replicas when the brokers appear to be fine, or actions on topics like topic creation not happening correctly. There are not many ways to monitor the controller, but you can monitor the active controller count and the controller queue size. Monitoring these metrics gives a high-level indicator if there is a problem. Although spikes in the queue size are expected, if this value continuously increases, or stays steady at a high value and does not drop, it indicates that the controller may be stuck. If you encounter this problem, you can move the controller to a different broker, which requires shutting down the broker that is currently the controller. Here are some metrics to check the performance of a Kafka controller: Metrics to check the performance of a Kafka controller kafka.controller:type=KafkaController,name=ActiveControllerCount 1 kafka.controller:type=KafkaController,name=OfflinePartitionsCount 2 kafka.controller:type=ControllerEventManager,name=EventQueueSize 3 1 Number of active controllers in the Kafka cluster. A value of 1 indicates that there is only one active controller, which is the desired state. 2 Number of partitions that are currently offline. If this value is continuously increasing or stays at a high value, there may be a problem with the controller. 3 Size of the event queue in the controller. Events are actions that must be performed by the controller, such as creating a new topic or moving a partition to a new broker. if the value continuously increases or stays at a high value, the controller may be stuck and unable to perform the required actions. 19.4.4. Identifying problems with requests You can use the RequestHandlerAvgIdlePercent metric to determine if requests are slow. Additionally, request metrics can identify which specific requests are experiencing delays and other issues. To effectively monitor Kafka requests, it is crucial to collect two key metrics: count and 99th percentile latency, also known as tail latency. The count metric represents the number of requests processed within a specific time interval. It provides insights into the volume of requests handled by your Kafka cluster and helps identify spikes or drops in traffic. The 99th percentile latency metric measures the request latency, which is the time taken for a request to be processed. It represents the duration within which 99% of requests are handled. However, it does not provide information about the exact duration for the remaining 1% of requests. In other words, the 99th percentile latency metric tells you that 99% of the requests are handled within a certain duration, and the remaining 1% may take even longer, but the precise duration for this remaining 1% is not known. The choice of the 99th percentile is primarily to focus on the majority of requests and exclude outliers that can skew the results. This metric is particularly useful for identifying performance issues and bottlenecks related to the majority of requests, but it does not give a complete picture of the maximum latency experienced by a small fraction of requests. By collecting and analyzing both count and 99th percentile latency metrics, you can gain an understanding of the overall performance and health of your Kafka cluster, as well as the latency of the requests being processed. Here are some metrics to check the performance of Kafka requests: Metrics to check the performance of requests # requests: EndTxn, Fetch, FetchConsumer, FetchFollower, FindCoordinator, Heartbeat, InitProducerId, # JoinGroup, LeaderAndIsr, LeaveGroup, Metadata, Produce, SyncGroup, UpdateMetadata 1 kafka.network:type=RequestMetrics,name=RequestsPerSec,request=([\w]+) 2 kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=([\w]+) 3 kafka.network:type=RequestMetrics,name=TotalTimeMs,request=([\w]+) 4 kafka.network:type=RequestMetrics,name=LocalTimeMs,request=([\w]+) 5 kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=([\w]+) 6 kafka.network:type=RequestMetrics,name=ThrottleTimeMs,request=([\w]+) 7 kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=([\w]+) 8 kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=([\w]+) 9 # attributes: Count, 99thPercentile 10 1 Request types to break down the request metrics. 2 Rate at which requests are being processed by the Kafka broker per second. 3 Time (in milliseconds) that a request spends waiting in the broker's request queue before being processed. 4 Total time (in milliseconds) that a request takes to complete, from the time it is received by the broker to the time the response is sent back to the client. 5 Time (in milliseconds) that a request spends being processed by the broker on the local machine. 6 Time (in milliseconds) that a request spends being processed by other brokers in the cluster. 7 Time (in milliseconds) that a request spends being throttled by the broker. Throttling occurs when the broker determines that a client is sending too many requests too quickly and needs to be slowed down. 8 Time (in milliseconds) that a response spends waiting in the broker's response queue before being sent back to the client. 9 Time (in milliseconds) that a response takes to be sent back to the client after it has been generated by the broker. 10 For all of the requests metrics, the Count and 99thPercentile attributes show the total number of requests that have been processed and the time it takes for the slowest 1% of requests to complete, respectively. 19.4.5. Using metrics to check the performance of clients By analyzing client metrics, you can monitor the performance of the Kafka clients (producers and consumers) connected to a broker. This can help identify issues highlighted in broker logs, such as consumers being frequently kicked off their consumer groups, high request failure rates, or frequent disconnections. Here are some metrics to check the performance of Kafka clients: Metrics to check the performance of client requests kafka.consumer:type=consumer-metrics,client-id=([-.\w]+) 1 # attributes: time-between-poll-avg, time-between-poll-max kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+) 2 # attributes: heartbeat-response-time-max, heartbeat-rate, join-time-max, join-rate, rebalance-rate-per-hour kafka.producer:type=producer-metrics,client-id=([-.\w]+) 3 # attributes: buffer-available-bytes, bufferpool-wait-time, request-latency-max, requests-in-flight # attributes: txn-init-time-ns-total, txn-begin-time-ns-total, txn-send-offsets-time-ns-total, txn-commit-time-ns-total, txn-abort-time-ns-total # attributes: record-error-total, record-queue-time-avg, record-queue-time-max, record-retry-rate, record-retry-total, record-send-rate, record-send-total 1 (Consumer) Average and maximum time between poll requests, which can help determine if the consumers are polling for messages frequently enough to keep up with the message flow. The time-between-poll-avg and time-between-poll-max attributes show the average and maximum time in milliseconds between successive polls by a consumer, respectively. 2 (Consumer) Metrics to monitor the coordination process between Kafka consumers and the broker coordinator. Attributes relate to the heartbeat, join, and rebalance process. 3 (Producer) Metrics to monitor the performance of Kafka producers. Attributes relate to buffer usage, request latency, in-flight requests, transactional processing, and record handling. 19.4.6. Using metrics to check the performance of topics and partitions Metrics for topics and partitions can also be helpful in diagnosing issues in a Kafka cluster. You can also use them to debug issues with a specific client when you are unable to collect client metrics. Here are some metrics to check the performance of a specific topic and partition: Metrics to check the performance of topics and partitions #Topic metrics kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=([-.\w]+) 1 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=([-.\w]+) 2 kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec,topic=([-.\w]+) 3 kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec,topic=([-.\w]+) 4 kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=([-.\w]+) 5 kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec,topic=([-.\w]+) 6 kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=([-.\w]+) 7 #Partition metrics kafka.log:type=Log,name=Size,topic=([-.\w]+),partition=([\d]+)) 8 kafka.log:type=Log,name=NumLogSegments,topic=([-.\w]+),partition=([\d]+)) 9 kafka.log:type=Log,name=LogEndOffset,topic=([-.\w]+),partition=([\d]+)) 10 kafka.log:type=Log,name=LogStartOffset,topic=([-.\w]+),partition=([\d]+)) 11 1 Rate of incoming bytes per second for a specific topic. 2 Rate of outgoing bytes per second for a specific topic. 3 Rate of fetch requests that failed per second for a specific topic. 4 Rate of produce requests that failed per second for a specific topic. 5 Incoming message rate per second for a specific topic. 6 Total rate of fetch requests (successful and failed) per second for a specific topic. 7 Total rate of fetch requests (successful and failed) per second for a specific topic. 8 Size of a specific partition's log in bytes. 9 Number of log segments in a specific partition. 10 Offset of the last message in a specific partition's log. 11 Offset of the first message in a specific partition's log Additional resources Apache Kafka documentation for a full list of available metrics Prometheus documentation Grafana documentation | [
"export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=<port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false",
"metric.reporters=org.apache.kafka.common.metrics.JmxReporter",
"export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false",
"auto.include.jmx.reporter=false",
"<metric_group>:type=<type_name>,name=<metric_name><other_attribute>=<value>",
"kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec",
"kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id>",
"kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id>,topic=<topic_id>",
"bin/kafka-topics.sh --bootstrap-server :9092 --describe --under-replicated-partitions Topic: topic-1 Partition: 0 Leader: 4 Replicas: 4,2,3 Isr: 4,3 Topic: topic-1 Partition: 1 Leader: 3 Replicas: 2,3,4 Isr: 3,4 Topic: topic-1 Partition: 2 Leader: 3 Replicas: 3,4,2 Isr: 3,4",
"kafka.server:type=ReplicaManager,name=PartitionCount 1 kafka.server:type=ReplicaManager,name=LeaderCount 2 kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec 3 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec 4 kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions 5 kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount 6",
"kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent 1 attributes: OneMinuteRate, FifteenMinuteRate kafka.server:type=socket-server-metrics,listener=([-.\\w]+),networkProcessor=([\\d]+) 2 attributes: connection-creation-rate kafka.network:type=RequestChannel,name=RequestQueueSize 3 kafka.network:type=RequestChannel,name=ResponseQueueSize 4 kafka.network:type=Processor,name=IdlePercent,networkProcessor=([-.\\w]+) 5 kafka.server:type=KafkaServer,name=TotalDiskReadBytes 6 kafka.server:type=KafkaServer,name=TotalDiskWriteBytes 7",
"kafka.controller:type=KafkaController,name=ActiveControllerCount 1 kafka.controller:type=KafkaController,name=OfflinePartitionsCount 2 kafka.controller:type=ControllerEventManager,name=EventQueueSize 3",
"requests: EndTxn, Fetch, FetchConsumer, FetchFollower, FindCoordinator, Heartbeat, InitProducerId, JoinGroup, LeaderAndIsr, LeaveGroup, Metadata, Produce, SyncGroup, UpdateMetadata 1 kafka.network:type=RequestMetrics,name=RequestsPerSec,request=([\\w]+) 2 kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=([\\w]+) 3 kafka.network:type=RequestMetrics,name=TotalTimeMs,request=([\\w]+) 4 kafka.network:type=RequestMetrics,name=LocalTimeMs,request=([\\w]+) 5 kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=([\\w]+) 6 kafka.network:type=RequestMetrics,name=ThrottleTimeMs,request=([\\w]+) 7 kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=([\\w]+) 8 kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=([\\w]+) 9 attributes: Count, 99thPercentile 10",
"kafka.consumer:type=consumer-metrics,client-id=([-.\\w]+) 1 attributes: time-between-poll-avg, time-between-poll-max kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\\w]+) 2 attributes: heartbeat-response-time-max, heartbeat-rate, join-time-max, join-rate, rebalance-rate-per-hour kafka.producer:type=producer-metrics,client-id=([-.\\w]+) 3 attributes: buffer-available-bytes, bufferpool-wait-time, request-latency-max, requests-in-flight attributes: txn-init-time-ns-total, txn-begin-time-ns-total, txn-send-offsets-time-ns-total, txn-commit-time-ns-total, txn-abort-time-ns-total attributes: record-error-total, record-queue-time-avg, record-queue-time-max, record-retry-rate, record-retry-total, record-send-rate, record-send-total",
"#Topic metrics kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=([-.\\w]+) 1 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=([-.\\w]+) 2 kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec,topic=([-.\\w]+) 3 kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec,topic=([-.\\w]+) 4 kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=([-.\\w]+) 5 kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec,topic=([-.\\w]+) 6 kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=([-.\\w]+) 7 #Partition metrics kafka.log:type=Log,name=Size,topic=([-.\\w]+),partition=([\\d]+)) 8 kafka.log:type=Log,name=NumLogSegments,topic=([-.\\w]+),partition=([\\d]+)) 9 kafka.log:type=Log,name=LogEndOffset,topic=([-.\\w]+),partition=([\\d]+)) 10 kafka.log:type=Log,name=LogStartOffset,topic=([-.\\w]+),partition=([\\d]+)) 11"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/monitoring-str |
Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud | Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud You can use Red Hat OpenShift Data Foundation for your workloads that run in IBM Cloud. These workloads might run in Red Hat OpenShift on IBM Cloud clusters that are in the public cloud or in your own IBM Cloud Satellite location. 1.1. Deploying on IBM Cloud public When you create a Red Hat OpenShift on IBM Cloud cluster, you can choose between classic or Virtual Private Cloud (VPC) infrastructure. The Red Hat OpenShift Data Foundation managed cluster add-on supports both infrastructure providers. For classic clusters, the add-on deploys the OpenShift Data Foundation operator with the Local Storage operator. For VPC clusters, the add-on deploys the OpenShift Data Foundation operator which you can use with IBM Cloud Block Storage on VPC storage volumes. Benefits of using the OpenShift Data Foundation managed cluster add-on to install OpenShift Data Foundation instead of installing from OperatorHub Deploy OpenShift Data Foundation from a single CRD instead of manually creating separate resources. For example, in the single CRD that add-on enables, you configure the namespaces, storagecluster, and other resources you need to run OpenShift Data Foundation. Classic - Automatically create PVs using the storage devices that you specify in your OpenShift Data Foundation CRD. VPC - Dynamically provision IBM Cloud Block Storage on VPC storage volumes for your OpenShift Data Foundation storage cluster. Get patch updates automatically for the managed add-on. Update the OpenShift Data Foundation version by modifying a single field in the CRD. Integrate with IBM Cloud Object Storage by providing credentials in the CRD. 1.1.1. Deploying on classic infrastructure in IBM Cloud You can deploy OpenShift Data Foundation on IBM Cloud classic clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator and the Local Storage operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud classic cluster, you create a single custom resource definition that contains your storage device configuration details. For more information, see the Preparing your cluster for OpenShift Data Foundation . 1.1.2. Deploying on VPC infrastructure in IBM Cloud You can deploy OpenShift Data Foundation on IBM Cloud VPC clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud classic cluster, you create a custom resource definition that contains your worker node information and the IBM Cloud Block Storage for VPC storage classes that you want to use to dynamically provision the OpenShift Data Foundation storage devices. For more information, see the Preparing your cluster OpenShift Data Foundation . 1.2. Deploying on IBM Cloud Satellite With IBM Cloud Satellite, you can create a location with your own infrastructure, such as an on-premises data center or another cloud provider, to bring IBM Cloud services anywhere, including where your data resides. If you store your data by using Red Hat OpenShift Data Foundation, you can use Satellite storage templates to consistently install OpenShift Data Foundation across the clusters in your Satellite location. The templates help you create a Satellite configuration of the various OpenShift Data Foundation parameters, such as the device paths to your local disks or the storage classes that you want to use to dynamically provision volumes. Then, you assign the Satellite configuration to the clusters where you want to install OpenShift Data Foundation. Benefits of using Satellite storage to install OpenShift Data Foundation instead of installing from OperatorHub Create versions your OpenShift Data Foundation configuration to install across multiple clusters or expand your existing configuration. Update OpenShift Data Foundation across multiple clusters consistently. Standardize storage classes that developers can use for persistent storage across clusters. Use a similar deployment pattern for your apps with Satellite Config. Choose from templates for an OpenShift Data Foundation cluster using local disks on your worker nodes or an OpenShift Data Foundation cluster that uses dynamically provisioned volumes from your storage provider. Integrate with IBM Cloud Object Storage by providing credentials in the template. 1.2.1. Using OpenShift Data Foundation with the local storage present on your worker nodes in IBM Cloud Satellite For an OpenShift Data Foundation configuration that uses the local storage present on your worker nodes, you can use a Satellite template to configure your OpenShift Data Foundation configuration. Your cluster must meet certain requirements, such as CPU and memory requirements and size requirements of the available raw unformatted, unmounted disks. Choose a local OpenShift Data Foundation configuration when you want to use the local storage devices already present on your worker nodes, or statically provisioned raw volumes that you attach to your worker nodes. For more information, see the IBM Cloud Satellite local OpenShift Data Foundation storage documentation . 1.2.2. Using OpenShift Data Foundation with remote, dynamically provisioned storage volumes in IBM Cloud Satellite For an OpenShift Data Foundation configuration that uses remote, dynamically provisioned storage volumes from your preferred storage provider, you can use a Satellite storage template to create your storage configuration. In your OpenShift Data Foundation configuration, you specify the storage classes that you want use and the volume sizes that you want to provision. Your cluster must meet certain requirements, such as CPU and memory requirements. Choose the OpenShift Data Foundation-remote storage template when you want to use dynamically provisioned remote volumes from your storage provider in your OpenShift Data Foundation configuration. For more information, see the IBM Cloud Satellite remote OpenShift Data Foundation storage documentation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_cloud/deploying_openshift_container_storage_using_ibm_cloud_rhodf |
E.3.11. /proc/tty/ | E.3.11. /proc/tty/ This directory contains information about the available and currently used tty devices on the system. Originally called teletype devices , any character-based data terminals are called tty devices. In Linux, there are three different kinds of tty devices. Serial devices are used with serial connections, such as over a modem or using a serial cable. Virtual terminals create the common console connection, such as the virtual consoles available when pressing Alt + <F-key> at the system console. Pseudo terminals create a two-way communication that is used by some higher level applications, such as XFree86. The drivers file is a list of the current tty devices in use, as in the following example: The /proc/tty/driver/serial file lists the usage statistics and status of each of the serial tty lines. In order for tty devices to be used as network devices, the Linux kernel enforces line discipline on the device. This allows the driver to place a specific type of header with every block of data transmitted over the device, making it possible for the remote end of the connection to treat a block of data as just one in a stream of data blocks. SLIP and PPP are common line disciplines, and each are commonly used to connect systems to one another over a serial link. | [
"serial /dev/cua 5 64-127 serial:callout serial /dev/ttyS 4 64-127 serial pty_slave /dev/pts 136 0-255 pty:slave pty_master /dev/ptm 128 0-255 pty:master pty_slave /dev/ttyp 3 0-255 pty:slave pty_master /dev/pty 2 0-255 pty:master /dev/vc/0 /dev/vc/0 4 0 system:vtmaster /dev/ptmx /dev/ptmx 5 2 system /dev/console /dev/console 5 1 system:console /dev/tty /dev/tty 5 0 system:/dev/tty unknown /dev/vc/%d 4 1-63 console"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-tty |
Chapter 2. Configuring an IBM Cloud account | Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Cloud The OpenShift Container Platform cluster uses a number of IBM Cloud(R) components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see IBM Cloud(R)'s documentation for Quotas and service limits . Virtual Private Cloud (VPC) Each OpenShift Container Platform cluster creates its own VPC. The default quota of VPCs per region is 10 and will allow 10 clusters. To have more than 10 clusters in a single region, you must increase this quota. Application load balancer By default, each cluster creates three application load balancers (ALBs): Internal load balancer for the master API server External load balancer for the master API server Load balancer for the router You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Cloud(R). Floating IP address By default, the installation program distributes control plane and compute machines across all availability zones within a region to provision the cluster in a highly available configuration. In each availability zone, a public gateway is created and requires a separate floating IP address. The default quota for a floating IP address is 20 addresses per availability zone. The default cluster configuration yields three floating IP addresses: Two floating IP addresses in the us-east-1 primary zone. The IP address associated with the bootstrap node is removed after installation. One floating IP address in the us-east-2 secondary zone. One floating IP address in the us-east-3 secondary zone. IBM Cloud(R) can support up to 19 clusters per region in an account. If you plan to have more than 19 default clusters, you must increase this quota. Virtual Server Instances (VSI) By default, a cluster creates VSIs using bx2-4x16 profiles, which includes the following resources by default: 4 vCPUs 16 GB RAM The following nodes are created: One bx2-4x16 bootstrap machine, which is removed after the installation is complete Three bx2-4x16 control plane nodes Three bx2-4x16 compute nodes For more information, see IBM Cloud(R)'s documentation on supported profiles . Table 2.1. VSI component quotas and limits VSI component Default IBM Cloud(R) quota Default cluster configuration Maximum number of clusters vCPU 200 vCPUs per region 28 vCPUs, or 24 vCPUs after bootstrap removal 8 per region RAM 1600 GB per region 112 GB, or 96 GB after bootstrap removal 16 per region Storage 18 TB per region 1050 GB, or 900 GB after bootstrap removal 19 per region If you plan to exceed the resources stated in the table, you must increase your IBM Cloud(R) account quota. Block Storage Volumes For each VPC machine, a block storage device is attached for its boot volume. The default cluster configuration creates seven VPC machines, resulting in seven block storage volumes. Additional Kubernetes persistent volume claims (PVCs) of the IBM Cloud(R) storage class create additional block storage volumes. The default quota of VPC block storage volumes are 300 per region. To have more than 300 volumes, you must increase this quota. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services) 2.3.1. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard- 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_name> 1 1 The instance cloud resource name. Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation . 2.3.2. Using IBM Cloud DNS Services for DNS resolution The installation program uses IBM Cloud(R) DNS Services to configure cluster DNS resolution and provide name lookup for a private cluster. You configure DNS resolution by creating a DNS services instance for the cluster, and then adding a DNS zone to the DNS Services instance. Ensure that the zone is authoritative for the domain. You can do this using a root domain or subdomain. Note IBM Cloud(R) does not support IPv6, so dual stack or IPv6 environments are not possible. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a DNS Services instance to use with your cluster: Install the DNS Services plugin by running the following command: USD ibmcloud plugin install cloud-dns-services Create the DNS Services instance by running the following command: USD ibmcloud dns instance-create <instance-name> standard-dns 1 1 At a minimum, you require a Standard DNS plan for DNS Services to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Create a DNS zone for the DNS Services instance: Set the target operating DNS Services instance by running the following command: USD ibmcloud dns instance-target <instance-name> Add the DNS zone to the DNS Services instance by running the following command: USD ibmcloud dns zone-create <zone-name> 1 1 The fully qualified zone name. You can use either the root domain or subdomain value as the zone name, depending on which you plan to configure. A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Record the name of the DNS zone you have created. As part of the installation process, you must update the install-config.yaml file before deploying the cluster. Use the name of the DNS zone as the value for the baseDomain parameter. Note You do not have to manage permitted networks or configure an "A" DNS resource record. As required, the installation program configures these resources automatically. 2.4. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.4.1. Required access policies You must assign the required access policies to your IBM Cloud(R) account. Table 2.2. Required access policies Service type Service Access policy scope Platform access Service access Account management IAM Identity Service All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Service ID creator Account management [2] Identity and Access Management All resources Editor, Operator, Viewer, Administrator Account management Resource group only All resource groups in the account Administrator IAM services Cloud Object Storage All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager, Content Reader, Object Reader, Object Writer IAM services Internet Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services DNS Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services VPC Infrastructure Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager The policy access scope should be set based on how granular you want to assign access. The scope can be set to All resources or Resources based on selected attributes . Optional: This access policy is only required if you want the installation program to create a resource group. For more information about resource groups, see the IBM(R) documentation . 2.4.2. Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.4.3. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached you IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . 2.5. Supported IBM Cloud regions You can deploy an OpenShift Container Platform cluster to the following regions: au-syd (Sydney, Australia) br-sao (Sao Paulo, Brazil) ca-tor (Toronto, Canada) eu-de (Frankfurt, Germany) eu-gb (London, United Kingdom) eu-es (Madrid, Spain) jp-osa (Osaka, Japan) jp-tok (Tokyo, Japan) us-east (Washington DC, United States) us-south (Dallas, United States) Note Deploying your cluster in the eu-es (Madrid, Spain) region is not supported for OpenShift Container Platform 4.14.6 and earlier versions. 2.6. steps Configuring IAM for IBM Cloud(R) | [
"ibmcloud plugin install cis",
"ibmcloud cis instance-create <instance_name> standard-next 1",
"ibmcloud cis instance-set <instance_name> 1",
"ibmcloud cis domain-add <domain_name> 1",
"ibmcloud plugin install cloud-dns-services",
"ibmcloud dns instance-create <instance-name> standard-dns 1",
"ibmcloud dns instance-target <instance-name>",
"ibmcloud dns zone-create <zone-name> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_cloud/installing-ibm-cloud-account |
Deploying OpenShift Data Foundation using Google Cloud | Deploying OpenShift Data Foundation using Google Cloud Red Hat OpenShift Data Foundation 4.17 Instructions on deploying OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Google Cloud. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_google_cloud/index |
Chapter 1. Introduction | Chapter 1. Introduction Migration Toolkit for Runtimes product will be End of Life on September 30th, 2024 All customers using this product should start their transition to Migration Toolkit for Applications . Migration Toolkit for Applications is fully backwards compatible with all features and rulesets available in Migration Toolkit for Runtimes and will be maintained in the long term. Migration Toolkit for Runtimes (MTR) provides an extensible and customizable rule-based tool that simplifies the migration and modernization of Java applications, such as migrating JBoss Enterprise Application Platform (EAP) 7 to 8 or migrating from any other application server towards EAP at scale. MTR provides the same migration solution as provided in the Migration Toolkit for Applications 5 releases. These release notes cover all Z-stream releases of MTR 1.2 with the most recent release listed first. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/release_notes/introduction |
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1] | Chapter 6. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) GroupsSlice is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 6.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 6.2.1. /apis/authorization.openshift.io/v1/subjectaccessreviews Table 6.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SubjectAccessReview Table 6.2. Body parameters Parameter Type Description body SubjectAccessReview schema Table 6.3. HTTP responses HTTP code Reponse body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/subjectaccessreview-authorization-openshift-io-v1 |
Chapter 32. Jira Add Comment Sink | Chapter 32. Jira Add Comment Sink Add a new comment to an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue code. The comment is set in the body of the message. 32.1. Configuration Options The following table summarizes the configuration options available for the jira-add-comment-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 32.2. Dependencies At runtime, the jira-add-comment-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 32.3. Usage This section describes how you can use the jira-add-comment-sink . 32.3.1. Knative Sink You can use the jira-add-comment-sink Kamelet as a Knative sink by binding it to a Knative object. jira-add-comment-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-167" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 32.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 32.3.1.2. Procedure for using the cluster CLI Save the jira-add-comment-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-add-comment-sink-binding.yaml 32.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-add-comment-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password="password"\&username="username"\&jiraUrl="jira url" This command creates the KameletBinding in the current namespace on the cluster. 32.3.2. Kafka Sink You can use the jira-add-comment-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jira-add-comment-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-167" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-add-comment-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 32.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 32.3.2.2. Procedure for using the cluster CLI Save the jira-add-comment-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-add-comment-sink-binding.yaml 32.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-add-comment-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password="password"\&username="username"\&jiraUrl="jira url" This command creates the KameletBinding in the current namespace on the cluster. 32.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-add-comment-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-167\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-add-comment-sink-binding.yaml",
"kamel bind --name jira-add-comment-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password=\"password\"\\&username=\"username\"\\&jiraUrl=\"jira url\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-167\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-add-comment-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-add-comment-sink-binding.yaml",
"kamel bind --name jira-add-comment-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password=\"password\"\\&username=\"username\"\\&jiraUrl=\"jira url\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jira-add-comment-sink |
Chapter 92. LRA | Chapter 92. LRA Since Camel 2.21 The LRA module provides bindings of the Saga EIP with any MicroProfile compatible LRA Coordinator . 92.1. Dependencies When using lra with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-lra-starter</artifactId> </dependency> 92.2. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.lra.coordinator-context-path The context path of the LRA coordinator service. String camel.lra.coordinator-url The base URL of the LRA coordinator service (e.g. ). String camel.lra.enabled Global option to enable/disable component auto-configuration, default is true. true Boolean camel.lra.local-participant-context-path The context path of the local participant callback services. String camel.lra.local-participant-url The local URL where the coordinator should send callbacks to (e.g. ). String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-lra-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-lra-component-starter |
Chapter 5. Uninstalling OpenShift Data Foundation | Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/uninstalling_openshift_data_foundation |
Managing automation content | Managing automation content Red Hat Ansible Automation Platform 2.5 Create and manage collections, content and repositories in automation hub Red Hat Customer Content Services | [
"curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id=\"cloud-services\" -d refresh_token=\"{{ user_token }}\" --fail --silent --show-error --output /dev/null",
"collections: # Install a collection from Ansible Galaxy. - name: community.aws version: 5.2.0 source: https://galaxy.ansible.com",
"collections: name: namespace.collection_name version: 1.0.0",
"ansible-galaxy collection install -r requirements.yml",
"ansible-galaxy collection install namespace.collection_name",
"{\"file\": \"filename\", \"signature\": \"filename.asc\"}",
"#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi",
"[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh",
"gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc",
"ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx",
"requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx",
"Collections: - name: community.kubernetes - name: community.aws version:\">=5.0.0\"",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/ <ee_name> : <tag>",
"podman images",
"podman tag registry.redhat.io/ <ee_name> : <tag> <automation_hub_hostname> / <ee_name>",
"podman images",
"podman login -u= <username> -p= <password> <automation_hub_url>",
"podman push <automation_hub_url> / <ee_name>",
"#!/usr/bin/env bash pulp_container SigningService will pass the next 4 variables to the script. MANIFEST_PATH=USD1 FINGERPRINT=\"USDPULP_SIGNING_KEY_FINGERPRINT\" IMAGE_REFERENCE=\"USDREFERENCE\" SIGNATURE_PATH=\"USDSIG_PATH\" Create container signature using skopeo skopeo standalone-sign USDMANIFEST_PATH USDIMAGE_REFERENCE USDFINGERPRINT --output USDSIGNATURE_PATH Optionally pass the passphrase to the key if password protected. --passphrase-file /path/to/key_password.txt Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"signature_path\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi",
"[all:vars] . . . automationhub_create_default_container_signing_service = True automationhub_container_signing_service_key = /absolute/path/to/key/to/sign automationhub_container_signing_service_script = /absolute/path/to/script/that/signs",
"> podman pull <container-name>",
"> podman tag <container-name> <server-address>/<container-name>:<tag name>",
"> podman push <server-address>/<container-name>:<tag name> --tls-verify=false --sign-by <reference to the gpg key on your local>",
"> podman push <server-address>/<container-name>:<tag name> --tls-verify=false",
"> sudo <name of editor> /etc/containers/policy.json",
"{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"quay.io\": [{\"type\": \"insecureAcceptAnything\"}], \"docker.io\": [{\"type\": \"insecureAcceptAnything\"}], \"<server-address>\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/tmp/containersig.txt\" }] } } }",
"{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"quay.io\": [{\"type\": \"insecureAcceptAnything\"}], \"docker.io\": [{\"type\": \"insecureAcceptAnything\"}], \"<server-address>\": [{ \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/tmp/<key file name>\", \"signedIdentity\": { \"type\": \"matchExact\" } }] } } }",
"> podman pull <server-address>/<container-name>:<tag name> --tls-verify=false"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/managing_automation_content/index |
2.2. File System Fragmentation | 2.2. File System Fragmentation While there is no defragmentation tool for GFS2 on Red Hat Enterprise Linux, you can defragment individual files by identifying them with the filefrag tool, copying them to temporary files, and renaming the temporary files to replace the originals. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-filefragment-gfs2 |
Chapter 16. Storage | Chapter 16. Storage Data Deduplication and Compression with VDO Red Hat Enterprise Linux 7.5 introduces Virtual Data Optimizer (VDO). This feature enables you to create block devices that transparently provide data deduplication, compression, and thin provisioning. Standard file systems and applications can run on these virtual block devices without modification. VDO is currently available only on the AMD64 and Intel 64 architectures. For more information on VDO, see the chapter Data Deduplication and Compression with VDO in the Storage Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo . (BZ#1480047) New boom utility for managing LVM snapshot and image boot entries This release adds the boom command, which you can use to manage additional boot loader entries on the system. You can use it to create, delete, list, and modify auxiliary boot entries for system snapshots and images. The utility provides a single tool for managing boot menu entries for LVM snapshots; therefore you no longer need to manually edit boot loader configuration files and work with detailed kernel parameters. The tool is provided by the lvm2-python-boom package. (BZ# 1278192 ) DM Multipath no longer requires reservation keys in advance DM Multipath now supports two new configuration options in the multipath.conf file: unpriv_sgio prkeys_file The reservation_key option of the defaults and multipaths sections accepts a new keyword: file . When set, the multipathd service will now use the file configured in the prkeys_file option of the defaults section to get the reservation key to use for the paths of a multipath device. The prkeys file is automatically updated by the mpathpersist utility. The default for the reservation_key option remains undefined, and default for the prkeys_file is /etc/multipath/prkeys . If the new unpriv_sgio option is set to yes , DM Multipath will now create all new devices and their paths with the unpriv_sgio attribute. This option is used internally by other software, and is unnecessary for most DM Multipath users. It defaults to no . These changes make it possible to use the mpathpersist utility without knowing ahead of time what reservation keys will be used and without adding them to the multipath.conf configuration file. As a result, it is now easier to use the mpathpersist utility to manage multipath persistent reservations in multiple setups. (BZ# 1452210 ) New property parameter supported in blacklist and blacklist_exception sections of multipath.conf The multipath.conf configuration file now supports the property parameter in the blacklist and blacklist_exception sections of the file. This parameter allows users to blacklist certain types of devices. The property parameter takes a regular expression string that is matched against the udev environment variable names for the device. The property parameter in blacklist_exception works differently than the other blacklist_exception parameters. If the parameter is set, the device must have a udev variable that matches. Otherwise, the device is blacklisted. Most usefully, this parameter allows users to blacklist SCSI devices that multipath should ignore, such as USB sticks and local hard drives. To allow only SCSI devices that could reasonably be multipathed, set this parameter to (SCSI_IDENT_|ID_WWN) in the blacklist_exceptions section of the multipath.conf file. (BZ# 1456955 ) smartmontools now support NVMe devices This update adds support for Nonvolatile Memory Express (NVMe) devices, especially Solid-state Drive (SSD) disks, into the smartmontools package. As a result, the smartmontools utilities can now be used for monitoring NVMe disks with the Self-Monitoring, Analysis and Reporting Technology System (S.M.A.R.T.). (BZ#1369731) Support for DIF/DIX (T10 PI) on specified hardware SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.5, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, it is not supported for use on the boot device, and it is not supported on virtualized guests. At the current time, the following vendors are known to provide this support. FUJITSU supports DIF and DIX on: EMULEX 16G FC HBA: EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650, DX60 S4, DX100 S4, DX200 S4, DX500 S4, DX600 S4, AF250 S2, AF650 S2 QLOGIC 16G FC HBA: QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650, DX60 S4, DX100 S4, DX200 S4, DX500 S4, DX600 S4, AF250 S2, AF650 S2 Note that T10 DIX requires database or some other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability. EMC supports DIF on: EMULEX 8G FC HBA: LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later EMULEX 16G FC HBA: LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later QLOGIC 16G FC HBA: QLE2670-E-SP and QLE2672-E-SP, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later Please refer to the hardware vendor's support information for the latest status. Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. (BZ#1499059) File system Direct Access (DAX) and device DAX now support huge pages Previously, each file system DAX and device DAX page fault mapped to a single page in the user space. With this update, file system DAX and device DAX can now map persistent memory in larger chunks, called huge pages. File system DAX supports huge pages that are, for example, 2 MiB in size on the AMD64 and Intel 64 architectures, and device DAX supports using either 2 MiB or 1 GiB huge pages on AMD64 and Intel 64. In comparison, a standard page is 4 KiB in size on the these architectures. When creating a DAX namespace, you can configure the page size that the namespace should use for all page faults. Huge pages lead to fewer page faults, smaller page tables, and less Translation Lookaside Buffer (TLB) contention. As a result, file system DAX and device DAX now use less memory and perform better. (BZ# 1457561 , BZ#1383493) fsadm can now grow and shrink LUKS-encrypted LVM volumes The fsadm utility is now able to grow and shrink Logical Volume Manager (LVM) volumes that are encrypted with Linux Unified Key Setup (LUKS). This applies both to using fsadm directly with the fsadm --lvresize command and to using it indirectly through the lvresize --resizefs command. Note that due to technical limitations, resizing of encrypted devices with a detached header is not supported. (BZ# 1113681 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_storage |
Monitoring | Monitoring OpenShift Container Platform 4.15 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc -n openshift-monitoring get configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |",
"oc apply -f cluster-monitoring-config.yaml",
"oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1",
"oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1",
"oc label nodes <node_name> <node_label> 1",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusK8s: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosQuerier: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperator: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi k8sPrometheusAdapter: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi kubeStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi telemeterClient: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi openshiftStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi nodeExporter: resources: limits: cpu: 50m memory: 150Mi requests: cpu: 20m memory: 50Mi monitoringPlugin: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperatorAdmissionWebhook: resources: limits: cpu: 50m memory: 100Mi requests: cpu: 20m memory: 50Mi",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: <metrics_collection_profile_name> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: minimal",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app.kubernetes.io/name: prometheus",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: audit: profile: <audit_log_level> 1",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring get deploy prometheus-adapter -o yaml",
"- --audit-policy-file=/etc/audit/request-profile.yaml - --audit-log-path=/var/log/adapter/audit.log",
"oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml",
"\"apiVersion\": \"audit.k8s.io/v1\" \"kind\": \"Policy\" \"metadata\": \"name\": \"Request\" \"omitStages\": - \"RequestReceived\" \"rules\": - \"level\": \"Request\"",
"oc -n openshift-monitoring exec -c <prometheus_adapter_pod_name> -- cat /var/log/adapter/audit.log",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1",
"oc -n openshift-monitoring get pods",
"prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m prometheus-k8s-0 6/6 Running 1 57m prometheus-k8s-1 6/6 Running 1 57m thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m",
"oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2",
"oc -n openshift-monitoring get pods",
"token=`oc create token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'",
"oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep",
"apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7",
"apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4",
"apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3",
"apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>",
"apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod",
"oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"global: resolve_timeout: 5m route: group_wait: 30s 1 group_interval: 5m 2 repeat_interval: 12h 3 receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=<your_service>\" 4 routes: - matchers: - <your_matching_rules> 5 receiver: <receiver> 6 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 7",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \"<your_key>\"",
"oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc -n openshift-user-workload-monitoring get pod",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h",
"oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring",
"oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring",
"oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring",
"Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # alertmanagerMain: enableUserAlertmanagerConfig: true 1 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2",
"oc -n openshift-user-workload-monitoring get alertmanager",
"NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s",
"oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1",
"oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1",
"oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1",
"oc label namespace my-project 'openshift.io/user-monitoring=false'",
"oc label namespace my-project 'openshift.io/user-monitoring-'",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false",
"oc -n openshift-user-workload-monitoring get pod",
"No resources found in openshift-user-workload-monitoring project.",
"oc label nodes <node_name> <node_label> 1",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11",
"oc apply -f monitoring-stack-alerts.yaml",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep",
"apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7",
"apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4",
"apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3",
"apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>",
"apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n <namespace> get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post",
"oc apply -f example-app-alert-routing.yaml",
"oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3",
"oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-",
"oc get routes -n openshift-monitoring thanos-querier -o jsonpath='{.status.ingress[0].host}'",
"curl -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://<thanos_querier_route>/api/v1/metadata 1",
"oc get routes -n openshift-monitoring thanos-querier -o jsonpath='{.status.ingress[0].host}'",
"curl -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://<thanos_querier_route>/api/v1/metadata 1",
"TOKEN=USD(oc whoami -t)",
"HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.status.ingress[].host})",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v2/receivers\"",
"TOKEN=USD(oc whoami -t)",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={.status.ingress[].host})",
"curl -G -k -H \"Authorization: Bearer USDTOKEN\" https://USDHOST/federate --data-urlencode 'match[]=up'",
"TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834",
"TOKEN=USD(oc whoami -t)",
"HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath={.status.ingress[].host})",
"NAMESPACE=ns1",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\"",
"{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"up\", \"endpoint\": \"web\", \"instance\": \"10.129.0.46:8080\", \"job\": \"prometheus-example-app\", \"namespace\": \"ns1\", \"pod\": \"prometheus-example-app-68d47c4fb6-jztp2\", \"service\": \"prometheus-example-app\" }, \"value\": [ 1591881154.748, \"1\" ] } ], } }",
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6",
"oc apply -f example-alerting-rule.yaml",
"apiVersion: monitoring.openshift.io/v1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: \"Watchdog;none\" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6",
"oc apply -f example-modified-alerting-rule.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5",
"oc apply -f example-app-alerting-rule.yaml",
"oc -n <namespace> delete prometheusrule <foo>",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5",
"oc apply -f example-app-alerting-rule.yaml",
"oc -n <project> get prometheusrule",
"oc -n <project> get prometheusrule <rule> -o yaml",
"oc -n <namespace> delete prometheusrule <foo>",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host})",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dt */ | grep -Eo \"[0-9|A-Z]{26}\")'",
"308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B",
"oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/",
"Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/monitoring/index |
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Packages for Eclipse Temurin are made available on Microsoft Windows and on multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.16/pr01 |
Chapter 1. OpenShift Container Platform security and compliance | Chapter 1. OpenShift Container Platform security and compliance 1.1. Security overview It is important to understand how to properly secure various aspects of your OpenShift Container Platform cluster. Container security A good starting point to understanding OpenShift Container Platform security is to review the concepts in Understanding container security . This and subsequent sections provide a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. These sections also include information on the following topics: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. Auditing OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Administrators can configure the audit log policy and view audit logs . Certificates Certificates are used by various components to validate access to the cluster. Administrators can replace the default ingress certificate , add API server certificates , or add a service certificate . You can also review more details about the types of certificates used by the cluster: User-provided certificates for the API server Proxy certificates Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Aggregated API client certificates Machine Config Operator certificates User-provided certificates for default ingress Ingress certificates Monitoring and cluster logging Operator component certificates Control plane certificates Encrypting data You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. Vulnerability scanning Administrators can use the Red Hat Quay Container Security Operator to run vulnerability scans and review information about detected vulnerabilities. 1.2. Compliance overview For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. Compliance checking Administrators can use the Compliance Operator to run compliance scans and recommend remediations for any issues found. The oc-compliance plugin is an OpenShift CLI ( oc ) plugin that provides a set of utilities to easily interact with the Compliance Operator. File integrity checking Administrators can use the File Integrity Operator to continually run file integrity checks on cluster nodes and provide a log of files that have been modified. 1.3. Additional resources Understanding authentication Configuring the internal OAuth server Understanding identity provider configuration Using RBAC to define and apply permissions Managing security context constraints | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/security-compliance-overview |
Chapter 3. Red Hat build of OpenJDK 8.0.382 release notes | Chapter 3. Red Hat build of OpenJDK 8.0.382 release notes The latest Red Hat build of OpenJDK 8 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 8 releases. Note For all the other changes and security fixes, see OpenJDK 8u382 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that the Red Hat build of OpenJDK 8.0.382 release provides: Support for GB18030-2022 The Chinese Electronics Standardization Institute (CESI) recently published GB18030-2022 as an update to the GB18030 standard, synchronizing the character set with Unicode 11.0. The GB18030-2022 standard is now the default GB18030 character set that Red Hat build of OpenJDK 8.0.382 uses. However, this updated character set contains incompatible changes compared with GB18030-2000, which releases of Red Hat build of OpenJDK 8 used. From Red Hat build of OpenJDK 8.0.382 onward, if you want to use the version of the character set, ensure that the new system property jdk.charset.GB18030 is set to 2000 . See JDK-8301119 (JDK Bug System) . Additional characters for GB18030-2022 (Level 2) support allowed To support "Implementation Level 2" of the GB18030-2022 standard, Red Hat build of OpenJDK must support the use of characters that are in the Chinese Japanese Korean (CJK) Unified Ideographs Extension E block of Unicode 8.0. Maintenance Release 5 of the Java SE 8 specification adds support for these characters, which Red Hat build of OpenJDK 8.0.382 implements through the addition of a new UnicodeBlock instance, Character.CJK_UNIFIED_IDEOGRAPHS_EXTENSION_E . See JDK-8305681 (JDK Bug System) . Enhanced validation of JAR signature You can now configure the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file by setting a new system property, jdk.jar.maxSignatureFileSize . By default, the jdk.jar.maxSignatureFileSize property is set to 8000000 bytes (8 MB). JDK bug system reference ID: JDK-8300596. GTS root certificate authority (CA) certificates added In the Red Hat build of OpenJDK 8.0.382 release, the cacerts truststore includes four Google Trust Services (GTS) root certificates: Certificate 1 Name: Google Trust Services LLC Alias name: gtsrootcar1 Distinguished name: CN=GTS Root R1, O=Google Trust Services LLC, C=US Certificate 2 Name: Google Trust Services LLC Alias name: gtsrootcar2 Distinguished name: CN=GTS Root R2, O=Google Trust Services LLC, C=US Certificate 3 Name: Google Trust Services LLC Alias name: gtsrootcar3 Distinguished name: CN=GTS Root R3, O=Google Trust Services LLC, C=US Certificate 4 Name: Google Trust Services LLC Alias name: gtsrootcar4 Distinguished name: CN=GTS Root R4, O=Google Trust Services LLC, C=US See JDK-8307134 (JDK Bug System) . Microsoft Corporation root CA certificates added In the Red Hat build of OpenJDK 8.0.382 release, the cacerts truststore includes two Microsoft Corporation root certificates: Certificate 1 Name: Microsoft Corporation Alias name: microsoftecc2017 Distinguished name: CN=Microsoft ECC Root Certificate Authority 2017, O=Microsoft Corporation, C=US Certificate 2 Name: Microsoft Corporation Alias name: microsoftrsa2017 Distinguished name: CN=Microsoft RSA Root Certificate Authority 2017, O=Microsoft Corporation, C=US See JDK-8304760 (JDK Bug System) . TWCA root CA certificate added In the Red Hat build of OpenJDK 8.0.382 release, the cacerts truststore includes the Taiwan Certificate Authority (TWCA) root certificate: Name: TWCA Alias name: twcaglobalrootca Distinguished name: CN=TWCA Global Root CA, OU=Root CA, O=TAIWAN-CA, C=TW See JDK-8305975 (JDK Bug System) . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.382/openjdk-80382-release-notes_openjdk |
function::s32_arg | function::s32_arg Name function::s32_arg - Return function argument as signed 32-bit value Synopsis Arguments n index of argument to return Description Return the signed 32-bit value of argument n, same as int_arg. | [
"s32_arg:long(n:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-s32-arg |
Chapter 43. Red Hat Enterprise Linux Atomic Host 7.2 | Chapter 43. Red Hat Enterprise Linux Atomic Host 7.2 43.1. Atomic Host OStree update : New Tree Version: 7.2 (hash: ec85fba1bf789268d5fe954aac09e6bd58f718e47a2fcb18bf25073b396e695d) Changes since Tree Version 7.1.6 (hash: 23d96474f6775c27cf258e9872330b23f20e80ff4e0b61426debd00ca11a953f) 43.2. Extras Updated packages : atomic-1.6-6.gitca1e384.el7 cockpit-0.77-3.1.el7 docker-1.8.2-8.el7 flannel-0.5.3-8.el7 kubernetes-1.0.3-0.2.gitb9a88a7.el7 python-docker-py-1.4.0-118.el7 python-websocket-client-0.32.0-116.el7 storaged-2.2.0-3.el7 * New packages : docker-distribution-2.1.1-3.el7 * The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 43.2.1. Container Images Updated : Red Hat Enterprise Linux Container Image (rhel7/rhel) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) New : Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic Kubernetes-controller Container Image (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes-apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes-scheduler Container Image (rhel7/kubernetes-scheduler) 43.3. New Features docker has been upgraded to version 1.8.2 Notable changes: docker now displays a warning message if you are using the loopback device as a backend storage option. The docker info command now shows the rpm version of the client and server. The default mount propagation is Slave instead of Private . This allows volume (bind) mounts, to be altered on the host and the new mounts show up inside of the container. The --add-registry and --block-registry options have been added. This allows additional registries to be specified in addition to docker.io in /etc/sysconfig/docker . You can now inspect the content of remote repositories and check for newer versions. This functionality is implemented in the atomic verify command from the atomic command-line tool. flannel has been upgraded to version 0.5.3 Notable changes: flannel's network prefix was changed from coreos.com/network to atomic.io/network . flannel's behavior when the first ping packet was lost has been fixed. The flanneld.service now starts after the network is ready. Cockpit has been rebased to version 0.77 Notable changes: Cockpit now displays the limit for the number of supported hosts when adding servers to the dashboard. Cleaner bookmarkable URLs. Includes basic SSH key authentication functionality. Basic interactions with multipath storage have been fixed. When password authorization is not possible, Cockpit displays an informative message. Authentication now works when embedding Cockpit. Removed systemd socket activation For security reasons, systemd socket activation, which was supported in earlier versions of docker, has been removed. Now, it is not recommended to use the docker group as a mechanism for talking to the docker daemon as a non-privileged user. Instead, set up sudo for this type of access. If the docker daemon is not running after the upgrade, create the /etc/sysconfig/docker.rpmnew file, add any local customization to it and replace /etc/sysconfig/docker with it. Additionally, remove the -H fd:// line from /etc/sysconfig/docker if it is present. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_2 |
Chapter 2. Red Hat Quay prerequisites | Chapter 2. Red Hat Quay prerequisites Before deploying Red Hat Quay, you must provision image storage, a database, and Redis. 2.1. Image storage backend Red Hat Quay stores all binary blobs in its storage backend. Local storage Red Hat Quay can work with local storage, however this should only be used for proof of concept or test setups, as the durability of the binary blobs cannot be guaranteed. HA storage setup For a Red Hat Quay HA deployment, you must provide HA image storage, for example: Red Hat OpenShift Data Foundation , previously known as Red Hat OpenShift Container Storage, is software-defined storage for containers. Engineered as the data and storage services platform for OpenShift Container Platform, Red Hat OpenShift Data Foundation helps teams develop and deploy applications quickly and efficiently across clouds. More information can be found at https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation . Ceph Object Gateway (also called RADOS Gateway) is an example of a storage solution that can provide the the object storage needed by Red Hat Quay. Detailed instructions on how to use Ceph storage as a highly available storage backend can be found in the Quay High Availability Guide . Further information about Red Hat Ceph Storage and HA setups can be found in the Red Hat Ceph Storage Architecture Guide Geo-replication Local storage cannot be used for geo-replication, so a supported on premise or cloud based object storage solution must be deployed. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines. This requires the image storage to be accessible from all regions. 2.1.1. Supported image storage engines Red Hat Quay supports the following on premise storage types: Ceph/Rados RGW OpenStack Swift Red Hat OpenShift Data Foundation 4 (through NooBaa) Red Hat Quay supports the following public cloud storage engines: Amazon Web Services (AWS) S3 Google Cloud Storage Azure Blob Storage 2.1.2. Unsupported image storage engines Currently, Hitachi HCP is unsupported. Because every implementation of S3 is different, problems have arisen with Hitachi HCP in the past. Hitachi HCP might work if Ceph/RADOS drivers are used, however, Red Hat Quay cannot guarantee that it works properly in all scenarios and is therefor unsupported. 2.2. Database backend Red Hat Quay stores all of its configuration information in the config.yaml file. Registry metadata, for example, user information, robot accounts, team, permissions, organizations, images, tags, manifests, etc. are stored inside of the database backend. Logs can be pushed to ElasticSearch if required. PostgreSQL is the preferred database backend because it can be used for both Red Hat Quay and Clair. A future version of Red Hat Quay will remove support for using MySQL and MariaDB as the database backend, which has been deprecated since the Red Hat Quay 3.6 release. Until then, MySQL is still supported according to the support matrix , but will not receive additional features or explicit testing coverage. The Red Hat Quay Operator supports only PostgreSQL deployments when the database is managed. If you want to use MySQL, you must deploy it manually and set the database component to managed: false . Deploying Red Hat Quay in a highly available (HA) configuration requires that your database services are provisioned for high availability. If Red Hat Quay is running on public cloud infrastructure, it is recommended that you use the PostgreSQL services provided by your cloud provider, however MySQL is also supported. Geo-replication requires a single, shared database that is accessible from all regions. 2.3. Redis Red Hat Quay stores builder logs inside a Redis cache. Because the data stored is ephemeral, Redis does not need to be highly available even though it is stateful. If Redis fails, you will lose access to build logs, builders, and the garbage collector service. Additionally, user events will be unavailable. You can use a Redis image from the Red Hat Software Collections or from any other source you prefer. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_architecture/arch-prereqs |
4.168. lvm2 | 4.168. lvm2 4.168.1. RHBA-2011:1522 - lvm2 bug fix and enhancement update Updated lvm2 packages that fix several bugs and add three enhancements are now available for Red Hat Enterprise Linux 6. The lvm2 packages contain support for Logical Volume Management (LVM). Bug Fixes BZ# 743112 Due to locking errors, multiple failed cmirror devices were unable to be replaced. With this update, the underlying source code has been modified to address this issue, and the aforementioned devices are correctly replaced should a failure occur. BZ# 696251 Prior to this update, extending a mirror volume beyond available extents while using the cling by tags allocation policy did not work properly. Normally, such an action returns an error message informing the user that there are insufficient allocatable extents for use. However, this check failed and caused a volume to be corrupted. Because the allocation code has been revised, restructured, and made more robust, the problematic scenario with extending mirror volumes while using the cling by tags policy no longer occurs. BZ# 684083 While performing extensive I/O operations in the background, the pvmove command could become unresponsive. With this update, the underlying source code has been modified to address this issue, and the pvmove command no longer hangs. BZ# 733320 When a striped logical volume was resized with the lvresize command, the size was rounded down to the stripe boundary. This could pose a problem when shrinking the volume with a file system on it. Even if a user determined the new size so that the file system did fit entirely onto the volume, and resized the volume, the alignment done by the lvresize command might have cut off a part of the file system, causing it to become corrupted. This update fixes the rounding for striped volumes so that a volume is never reduced more than requested. BZ# 594525 Prior to this update, placing mirror legs on different physical devices with the lvcreate --alloc anywhere command did not guarantee placement of data on different physical devices. With this update, the above command tries to allocate each mirror image on a separate device first before placing it on a device that is already used. BZ# 737087 If the lvcreate command was used with large physical volumes while using %FREE , %VG , %PVS or %ORIGIN for size definition, the resulting LV size was incorrectly calculated. This was caused by an integer overflow while calculating the percentages. This update provides a better way of calculating the sizes, by using proper typecasting, so that the overflow no longer occurs. BZ# 715190 Several LVM locking error and warning messages were returned during the system start-up which were caused by cluster locking (configured globally in /etc/lvm/lvm.conf ). At the early stage of the system start-up, when the early init script tries to activate any existing VGs, the cluster infrastructure is still not initialized (as well as the network interface) and therefore cluster locking cannot be used and the system falls back to file-based locking instead, causing several misleading error and warning messages to be returned. With this update, these error and warning messages are suppressed during the system start-up, and the system falls back to usable locking mechanism silently. BZ# 712147 The vgimportclone script triggered a code path in LVM that caused it to access already-released memory when a duplicated PV was found. Consequently, the VG that contained such PV was found to be inconsistent and the process ended up with a failure to read the VG. This update fixes this failure by saving such problematic strings to a temporary buffer, and thus avoiding improper memory access. BZ# 697945 The cluster LVM daemon ( clvmd ) was crashing when attempting to create a high number of volume groups at once. This was caused by the limit set by the number of available file descriptors per process. While clvmd was creating pipes and the limit was reached under the pressure of high number of requests, clvmd did not return an error but continued to use uninitialized pipes instead, eventually causing it to crash. With this update, clvmd now returns an error message immediately if the pipe creation fails. BZ# 734193 When using striped mirrors, improper and overly-restrictive divisibility requirements for the extent count could cause a failure to create a striped mirror, even though it was correct and possible. The condition that was checked counted in the mirror count and the stripe count, though, only the stripe count alone was satisfactory. This update fixes this, and creating a striped mirror no longer fails. BZ# 732142 Before, an improper activation sequence was used while performing an image split operation. That caused a device-mapper table to be loaded while some of processed devices were known to be suspended. This has been fixed and the activation sequence has been reordered so that the table is always loaded at proper time. BZ# 570359 Issuing an lvremove command could cause a failure to remove a logical volume. This failure was caused by processing an asynchronous udev event that kept the volume opened while the lvremove command tried to remove it. These asynchronous events are triggered when the watch udev rule is applied (it is set for device-mapper/LVM2 devices when using the udisks package that installs /lib/udev/rules.d/80-udisks.rules ). To fix this issue, the number of device open calls in read-write mode has been minimized and read-only mode is used internally if possible (the event is generated when closing a device that has the watch rule set and is closed after a read-write open). Although this fixes a problem when opening a device internally within the command execution, the failure could still occur when using several commands quickly in a sequence where each one opens a device for read-write and then closes it immediately (for example in a script). In such a case, it is advisable to use the udevadm settle command in between. BZ# 695526 With this update, when using the lvconvert command, the Unable to create a snapshot of a locked|pvmove|mirrored LV error message has been changed to Unable to convert an LV into a snapshot of a locked|pvmove|mirrored LV. for clarity reasons. BZ# 711445 A hostname containing the slash character ( " / " ) caused LVM commands to fail while generating an archive of current metadata. Because a hostname is a part of the temporary archive file name, a file path that was ambiguous was created, which caused the whole archive operation to fail. This update fixes this by replacing any slash character ( " / " ) with a question mark character ( " ? " ) in the hostname string and then is used to compose the temporary archive file name. BZ# 712829 An issue was discovered when running several commands in parallel that activated or deactivated an LV or a VG. The symbolic links for LVs in /dev were created and removed incorrectly, causing them to exist when the device had already been removed or vice versa. This problem was caused by the fact that during the activation there was no write lock held that would protect individual activation commands as a whole (there was no metadata change). Together with non-atomicity of checking udev operations, an improper decision was made in the code based on the already stale information. This triggered a part of the code that attempted to repair the symbolic links as a fallback action. To fix this, these checks are no longer run by default, thus fully relying on udev . However, the old functionality can still be used for diagnosing other udev related problems by setting a new verify_udev_operations option found in the activation section of the /etc/lvm/lvm.conf file. BZ# 728157 This update removes the unsupported --force option from the lvrename manpage. BZ# 743932 With this update, the vgsplit command is now able to split a volume group containing a mirror with mirrored logs. Enhancements BZ# 623808 Prior to this update, it was not possible to create a PV object with all properties calculated (for example, the PE start value) without a need to write the PV label on the disk while using an LVM2 library ( lvm2app ). This has been changed so that the PV label is written out later in the process as a part of the lvm_vg_write call, making it possible to calculate all PV properties and query them without actually writing the PV label on the disk. BZ# 651493 This update adds support for issuing discards (TRIM) as part of lvm2 operations. BZ# 729712 In Red Hat Enterprise Linux 6.2, support for MD's RAID personalities has been added to LVM as a Technology Preview. For more information about this feature, refer to the Red Hat Enterprise Linux 6.2 Release Notes . Users are advised to upgrade to these updated lvm2 packages, which resolve these issues and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/lvm2 |
Chapter 5. Appendix | Chapter 5. Appendix This appendix contains the following reference materials: System Facts Operators 5.1. System Facts The following table defines the system facts for use in system comparisons. Table 5.1. System facts and their functions Fact name Description Example value Ansible Category with a list of Ansible-related facts controller_version with a value of 4.0.0 arch System architecture x86_64 bios_release_date BIOS release date; typically MM/DD/YYYY 01/01/2011 bios_vendor BIOS vendor name LENOVO bios_version BIOS version 1.17.0 cloud_provider Cloud vendor. Values are google , azure , aws , alibaba , or empty google cores_per_socket Number of CPU cores per socket 2 cpu_flags Category with a list of CPU flags. Each name is the CPU flag (ex: vmx ), and the value is always enabled . vmx , with a value of enabled enabled_services Category with a list of enabled services. Each name in the category is the service name (ex: crond ), and the value is always enabled . crond , with a value of enabled fqdn The fully qualified domain name (FQDN) of the system system1.example.com infrastructure_type System infrastructure; common values are virtual or physical virtual infrastructure_vendor Infrastructure vendor; common values are kvm , vmware , baremetal , etc. kvm installed_packages List of installed RPM packages. This is a category. bash , with a value of 4.2.46-33.el7.x86_64 . installed_services Category with a list of installed services. Each name in the category is the service name (ex: crond ), and the value is always installed . crond , with a value of installed . kernel_modules List of kernel modules. Each name in the category is the kernel module (ex: nfs ), and the value is enabled . nfs , with a value of enabled . last_boot_time The boot time in YYYY-MM-DDTHH:MM:SS format. Informational only; we do not compare boot times across systems. 2019-09-18T16:54:56 mssql Category with a list of Microsoft SQL Server-related facts mssql_version with a value of 15.0.4153.1 network_interfaces List of facts related to network interfaces. There are six facts for each interface: ipv6_addresses , ipv4_addresses , mac_address , mtu , state and type . The two address fields are comma-separated lists of IP addresses. The state field is either UP or DOWN . The type field is the interface type (ex: ether , loopback , bridge , etc.). Each interface is prefixed to the fact name. For example, the interface em1 would have a mac_address system fact value of em1.mac_address . Most network interface facts are compared to ensure they are equal across systems. However, ipv4_addresses , ipv6_addresses , and mac_address are checked to ensure they are different across systems. A subexception for lo must always have the same IP and MAC address on all systems. number_of_cpus Total number of CPUs 1 number_of_sockets Total number of sockets 1 os_kernel_version Kernel version 4.18.0 os_release Kernel release 8.1 running_processes List of running processes. The fact name is the name of the process, and the value is the instance count. crond , with a value of 1 . sap_instance_number SAP instance number 42 sap_sids SAP system ID (SID) A42 sap_system Boolean field that indicates if SAP is installed on the system True sap_version SAP version number 2.00.052.00.1599235305 satellite_managed Boolean field that indicates whether a system is registered to a Satellite Server FALSE selinux_current_mode Current SELinux mode enforcing selinux_config_file SELinux mode set in the config file enforcing systemd The number of failures, the number of current jobs queued, and the current state of systemd state with a value of degraded system_memory_bytes Total system memory in bytes 8388608 tuned_profile Current profile resulting from the command tuned-adm active desktop yum_repos List of yum repositories. The repository name is added to the beginning of the fact. Each repository has the associated facts base_url , enabled , and gpgcheck . Red Hat Enterprise Linux 7 Server (RPMs).base_url would have the value https://cdn.redhat.com/content/dist/rhel/server/7/USDreleasever/USDbasearch/os 5.2. Operators Table 5.2. Available Operators in Conditions Operators Value Logical Operators AND OR Boolean Operators NOT ! = != Numeric Compare Operators > >= < <= String Compare Operators CONTAINS MATCHES Array Operators IN CONTAINS | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/monitoring_and_reacting_to_configuration_changes_using_policies/policies-appendix_intro-policies |
Chapter 1. Content and patch management with Red Hat Satellite | Chapter 1. Content and patch management with Red Hat Satellite With Red Hat Satellite, you can provide content and apply patches to hosts systematically in all lifecycle stages. 1.1. Content flow in Red Hat Satellite Content flow in Red Hat Satellite involves management and distribution of content from external sources to hosts. Content in Satellite flows from external content sources to Satellite Server . Capsule Servers mirror the content from Satellite Server to hosts . External content sources You can configure many content sources with Satellite. The supported content sources include the Red Hat Customer Portal, Git repositories, Ansible collections, Docker Hub, SCAP repositories, or internal data stores of your organization. Satellite Server On your Satellite Server, you plan and manage the content lifecycle. Capsule Servers By creating Capsule Servers, you can establish content sources in various locations based on your needs. For example, you can establish a content source for each geographical location or multiple content sources for a data center with separate networks. Hosts By assigning a host system to a Capsule Server or directly to your Satellite Server, you ensure the host receives the content they provide. Hosts can be physical or virtual. Additional resources See Chapter 4, Major Satellite components for details. See Managing Red Hat subscriptions in Managing content for information about Content Delivery Network (CDN). 1.2. Content views in Red Hat Satellite A content view is a deliberately curated subset of content that your hosts can access. By creating a content view, you can define the software versions used by a particular environment or Capsule Server. Each content view creates a set of repositories across each environment. Your Satellite Server stores and manages these repositories. For example, you can create content views in the following ways: A content view with older package versions for a production environment and another content view with newer package versions for a Development environment. A content view with a package repository required by an operating system and another content view with a package repository required by an application. A composite content view for a modular approach to managing content views. For example, you can use one content view for content for managing an operating system and another content view for content for managing an application. By creating a composite content view that combines both content views, you create a new repository that merges the repositories from each of the content views. However, the repositories for the content views still exist and you can keep managing them separately as well. Default Organization View A Default Organization View is an application-controlled content view for all content that is synchronized to Satellite. You can register a host to the Library environment on Satellite to consume the Default Organization View without configuring content views and lifecycle environments. Promoting a content view across environments When you promote a content view from one environment to the environment in the application lifecycle, Satellite updates the repository and publishes the packages. Example 1.1. Promoting a package from Development to Testing The repositories for Testing and Production contain the my-software -1.0-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 1 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm my-software -1.0-0.noarch.rpm If you promote Version 2 of the content view from Development to Testing , the repository for Testing updates to contain the my-software -1.1-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 2 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm This ensures hosts are designated to a specific environment but receive updates when that environment uses a new version of the content view. Additional resources For more information, see Managing content views in Managing content . 1.3. Content types in Red Hat Satellite With Red Hat Satellite, you can import and manage many content types. For example, Satellite supports the following content types: RPM packages Import RPM packages from repositories related to your Red Hat subscriptions. Satellite Server downloads the RPM packages from the Red Hat Content Delivery Network and stores them locally. You can use these repositories and their RPM packages in content views. Kickstart trees Import the Kickstart trees to provision a host. New systems access these Kickstart trees over a network to use as base content for their installation. Red Hat Satellite contains predefined Kickstart templates. You can also create your own Kickstart templates. ISO and KVM images Download and manage media for installation and provisioning. For example, Satellite downloads, stores, and manages ISO images and guest images for specific Red Hat Enterprise Linux and non-Red Hat operating systems. Custom file type Manage custom content for any type of file you require, such as SSL certificates, ISO images, and OVAL files. 1.4. Additional resources For information about how to manage content with Satellite, see Managing content . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/content-and-patch-management-with-satellite_planning |
2.15. RHEA-2011:0777 - new package: libcxgb4 | 2.15. RHEA-2011:0777 - new package: libcxgb4 New libcxgb4 packages are available for Red Hat Enterprise Linux 6. libcxgb4 provides a userspace hardware driver for use with the libibverbs InfiniBand/iWARP verbs library. This driver enables Chelsio Internet Wide Area RDMA Protocol (iWARP) capable ethernet devices. This enhancement update adds the libcxgb4 package to Red Hat Enterprise Linux 6. (BZ# 675024 ) All users of Chelsio iWARP capable ethernet devices are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/libcxgb4_new |
14.5.5. Displaying Device Block Statistics | 14.5.5. Displaying Device Block Statistics This command will display the block statistics for a running domain. You need to have both the domain name and the device name (use the virsh domblklist to list the devices.)In this case a block device is the unique target name (<target dev='name'/>) or a source file (< source file ='name'/>). Note that not every hypervisor can display every field. To make sure that the output is presented in its most legible form use the --human option, as shown: | [
"virsh domblklist rhel6 Target Source ------------------------------------------------ vda /VirtualMachines/rhel6.img hdc - virsh domblkstat --human rhel6 vda Device: vda number of read operations: 174670 number of bytes read: 3219440128 number of write operations: 23897 number of bytes written: 164849664 number of flush operations: 11577 total duration of reads (ns): 1005410244506 total duration of writes (ns): 1085306686457 total duration of flushes (ns): 340645193294"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-displaying_device_block_statistics |
Chapter 1. Introduction to Red Hat Openstack Services on OpenShift certification policy guide | Chapter 1. Introduction to Red Hat Openstack Services on OpenShift certification policy guide The Red Hat OpenStack Services on OpenShift (RHOSO) certification policy guide outlines the certification requirements for partner solutions. Red Hat encourages Partners to test their plugins with pre-releases of both Red Hat builds and their own solutions. 1.1. Audience This guide describes the technical certification requirements for software certification Partners who want to offer their applications, management applications, or plugin or driver software for use with Red Hat OpenStack Services on OpenShift (RHOSO) in a jointly supported customer environment. 1.2. Create value for customers The certification process includes a series of tests to ensure that certified solutions meet enterprise cloud requirements. This process is supported by both Red Hat and the Partner's organization. The Red Hat OpenStack Services on Openshift Certification Workflow Guide includes multiple tests, each with a series of subtests and checks. Not all tests are required for each certification. Submit logs from a single run with all mandatory and optional tests to Red Hat for new certifications and recertifications. The certification tooling and workflow supports certifications that are in progress for 90 days. Red Hat encourages using the latest version of the certification tooling and workflow for the certification process. A 90-day grace period is provided for versions of the tooling and workflow upon a new release, allowing ongoing certifications to proceed without disruption. After the grace period, results from older tooling versions are not accepted. The latest version of the certification tooling and workflow is available via Red Hat Subscription Management and documented in the Red Hat OpenStack Services on OpenShift Certification Workflow Guide. Note Certification subtests provide an immediate Pass or Fail status. It is recommended to review the output of failed tests and check the tempest and services logs to diagnose and fix any issues. Some configurations within OpenStack services, external system integration, or test configurations might need to be adjusted to run such tests successfully. If you are unable to resolve the failure, please reach out to Red Hat Support and the Certification team. Additional resources For more information on running the tests, see Red Hat OpenStack Services on Openshift Certification Workflow Guide . 1.3. Red Hat OpenStack services on OpenShift certification prerequisites To start your certification journey, you must meet the following requirements: Join the Red Hat Partner Connect program. Establish a support relationship with Red Hat. You can do this through the multi-vendor support network of TSANet , or through a custom support agreement. You must have a good working knowledge of RHOSO, including installation and configuration of the product. Additional Resources For more information about the product, see detailed product documentation on Red Hat Customer Portal Undertake the product training or certification on Red Hat Training Page . For more information about TSANet, see TSANet web page . 1.4. Red Hat OpenStack services on Openshift component distribution As part of Red Hat OpenStack Services on OpenShift (RHOSO), Red Hat distributes components committed to a release of the upstream OpenStack project . These components are called In-tree components. You are responsible for the certification and distribution of all dependencies that are not part of the upstream OpenStack project. You are also responsible for distributing products or components that are not committed to the upstream OpenStack project. These components are referred to as Out-of-tree components. Additional resources For more information, see the Integrating partner content . | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_policy_guide/assembly-introduction-to-rhoso-certification_rhoso-policy-guide |
Chapter 2. Recommended performance and scalability practices | Chapter 2. Recommended performance and scalability practices 2.1. Recommended control plane practices This topic provides recommended performance and scalability practices for control planes in OpenShift Container Platform. 2.1.1. Recommended practices for scaling the cluster The guidance in this section is only relevant for installations with cloud provider integration. Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform cluster. You scale the worker machines by increasing or decreasing the number of replicas that are defined in the worker machine set. When scaling up the cluster to higher node counts: Spread nodes across all of the available zones for higher availability. Scale up by no more than 25 to 50 machines at once. Consider creating new compute machine sets in each available zone with alternative instance types of similar size to help mitigate any periodic provider capacity constraints. For example, on AWS, use m5.large and m5d.large. Note Cloud providers might implement a quota for API services. Therefore, gradually scale the cluster. The controller might not be able to create the machines if the replicas in the compute machine sets are set to higher numbers all at one time. The number of requests the cloud platform, which OpenShift Container Platform is deployed on top of, is able to handle impacts the process. The controller will start to query more while trying to create, check, and update the machines with the status. The cloud platform on which OpenShift Container Platform is deployed has API request limits; excessive queries might lead to machine creation failures due to cloud platform limitations. Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines. Note When scaling large and dense clusters to lower node counts, it might take large amounts of time because the process involves draining or evicting the objects running on the nodes being terminated in parallel. Also, the client might start to throttle the requests if there are too many objects to evict. The default client queries per second (QPS) and burst rates are currently set to 50 and 100 respectively. These values cannot be modified in OpenShift Container Platform. 2.1.2. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused testing, or Cluster-density . This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters Number of worker nodes Cluster-density (namespaces) CPU cores Memory (GB) 24 500 4 16 120 1000 8 32 252 4000 16, but 24 if using the OVN-Kubernetes network plug-in 64, but 128 if using the OVN-Kubernetes network plug-in 501, but untested with the OVN-Kubernetes network plug-in 4000 16 96 The data from the table above is based on an OpenShift Container Platform running on top of AWS, using r5.4xlarge instances as control-plane nodes and m5.2xlarge instances as worker nodes. On a large and dense cluster with three control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, underlying infrastructure, or intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to increase in the resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the Running phase. Operator Lifecycle Manager (OLM) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.18 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. Clusters that use a control plane machine set to manage control plane machines. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Note In OpenShift Container Platform 4.18, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. The sizes are determined taking that into consideration. 2.1.2.1. Selecting a larger Amazon Web Services instance type for control plane machines If the control plane machines in an Amazon Web Services (AWS) cluster require more resources, you can select a larger AWS instance type for the control plane machines to use. Note The procedure for clusters that use a control plane machine set is different from the procedure for clusters that do not use a control plane machine set. If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status . 2.1.2.1.1. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Managing control plane machines with control plane machine sets 2.1.2.1.2. Changing the Amazon Web Services instance type by using the AWS console You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the instance type in the AWS console. Prerequisites You have access to the AWS console with the permissions required to modify the EC2 Instance for your cluster. You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Open the AWS console and fetch the instances for the control plane machines. Choose one control plane machine instance. For the selected control plane machine, back up the etcd data by creating an etcd snapshot. For more information, see "Backing up etcd". In the AWS console, stop the control plane machine instance. Select the stopped instance, and click Actions Instance Settings Change instance type . Change the instance to a larger type, ensuring that the type is the same base as the selection, and apply changes. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Start the instance. If your OpenShift Container Platform cluster has a corresponding Machine object for the instance, update the instance type of the object to match the instance type set in the AWS console. Repeat this process for each control plane machine. Additional resources Backing up etcd AWS documentation about changing the instance type 2.2. Recommended infrastructure practices This topic provides recommended performance and scalability practices for infrastructure in OpenShift Container Platform. 2.2.1. Infrastructure node sizing Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results observed in cluster-density testing detailed in the Control plane node sizing section, where the monitoring stack and the default ingress-controller were moved to these nodes. Number of worker nodes Cluster density, or number of namespaces CPU cores Memory (GB) 27 500 4 24 120 1000 8 48 252 4000 16 128 501 4000 32 128 In general, three infrastructure nodes are recommended per cluster. Important These sizing recommendations should be used as a guideline. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. In addition, the router resource usage can also be affected by the number of routes and the amount/type of inbound requests. These recommendations apply only to infrastructure nodes hosting Monitoring, Ingress and Registry infrastructure components installed during cluster creation. Note In OpenShift Container Platform 4.18, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. This influences the stated sizing recommendations. 2.2.2. Scaling the Cluster Monitoring Operator OpenShift Container Platform exposes metrics that the Cluster Monitoring Operator (CMO) collects and stores in the Prometheus-based monitoring stack. As an administrator, you can view dashboards for system resources, containers, and components metrics in the OpenShift Container Platform web console by navigating to Observe Dashboards . 2.2.3. Prometheus database storage requirements Red Hat performed various tests for different scale sizes. Note The following Prometheus storage requirements are not prescriptive and should be used as a reference. Higher resource consumption might be observed in your cluster depending on workload activity and resource density, including the number of pods, containers, routes, or other resources exposing metrics collected by Prometheus. You can configure the size-based data retention policy to suit your storage requirements. Table 2.1. Prometheus Database storage requirements based on number of nodes/pods in the cluster Number of nodes Number of pods (2 containers per pod) Prometheus storage growth per day Prometheus storage growth per 15 days Network (per tsdb chunk) 50 1800 6.3 GB 94 GB 16 MB 100 3600 13 GB 195 GB 26 MB 150 5400 19 GB 283 GB 36 MB 200 7200 25 GB 375 GB 46 MB Approximately 20 percent of the expected size was added as overhead to ensure that the storage requirements do not exceed the calculated value. The above calculation is for the default OpenShift Container Platform Cluster Monitoring Operator. Note CPU utilization has minor impact. The ratio is approximately 1 core out of 40 per 50 nodes and 1800 pods. Recommendations for OpenShift Container Platform Use at least two infrastructure (infra) nodes. Use at least three openshift-container-storage nodes with non-volatile memory express (SSD or NVMe) drives. 2.2.4. Configuring cluster monitoring You can increase the storage capacity for the Prometheus component in the cluster monitoring stack. Procedure To increase the storage capacity for Prometheus: Create a YAML configuration file, cluster-monitoring-config.yaml . For example: apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring 1 The default value of Prometheus retention is PROMETHEUS_RETENTION_PERIOD=15d . Units are measured in time using one of these suffixes: s, m, h, d. 2 4 The storage class for your cluster. 3 A typical value is PROMETHEUS_STORAGE_SIZE=2000Gi . Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. 5 A typical value is ALERTMANAGER_STORAGE_SIZE=20Gi . Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. Add values for the retention period, storage class, and storage sizes. Save the file. Apply the changes by running: USD oc create -f cluster-monitoring-config.yaml 2.2.5. Additional resources Infrastructure Nodes in OpenShift 4 OpenShift Container Platform cluster maximums Creating infrastructure machine sets 2.3. Recommended etcd practices To ensure optimal performance and scalability for etcd in OpenShift Container Platform, you can complete the following practices. 2.3.1. Storage practices for etcd Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because the consensus protocol for etcd depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to an OpenShift API slowness, which affects cluster performance. Because of these reasons, avoid colocating other workloads on the control-plane nodes that are I/O sensitive or intensive and share the same underlying I/O infrastructure. Run etcd on a block device that can write at least 50 IOPS of 8KB sequentially, including fdatasync, in under 10ms. For heavy loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as the fio command. To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads. Note The load on etcd arises from static factors, such as the number of nodes and pods, and dynamic factors, including changes in endpoints due to pod autoscaling, pod restarts, job executions, and other workload-related events. To accurately size your etcd setup, you must analyze the specific requirements of your workload. Consider the number of nodes, pods, and other relevant factors that impact the load on etcd. The following hard drive practices provide optimal etcd performance: Use dedicated etcd drives. Avoid drives that communicate over the network, such as iSCSI. Do not place log files or other heavy workloads on etcd drives. Prefer drives with low latency to support fast read and write operations. Prefer high-bandwidth writes for faster compactions and defragmentation. Prefer high-bandwidth reads for faster recovery from failures. Use solid state drives as a minimum selection. Prefer NVMe drives for production environments. Use server-grade hardware for increased reliability. Avoid NAS or SAN setups and spinning drives. Ceph Rados Block Device (RBD) and other types of network-attached storage can result in unpredictable network latency. To provide fast storage to etcd nodes at scale, use PCI passthrough to pass NVM devices directly to the nodes. Always benchmark by using utilities such as fio . You can use such utilities to continuously monitor the cluster performance as it increases. Avoid using the Network File System (NFS) protocol or other network based file systems. Some key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics. Note The etcd member database sizes can vary in a cluster during normal operations. This difference does not affect cluster upgrades, even if the leader size is different from the other members. 2.3.2. Validating the hardware for etcd To validate the hardware for etcd before or after you create the OpenShift Container Platform cluster, you can use fio. Prerequisites Container runtimes such as Podman or Docker are installed on the machine that you are testing. Data is written to the /var/lib/etcd path. Procedure Run fio and analyze the results: If you use Podman, run this command: USD sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf If you use Docker, run this command: USD sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 10 ms. A few of the most important etcd metrics that might affected by I/O performance are as follows: etcd_disk_wal_fsync_duration_seconds_bucket metric reports the etcd's WAL fsync duration etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration etcd_server_leader_changes_seen_total metric reports the leader changes Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric. The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms. Additional resources How to use fio to check etcd disk performance in OpenShift Container Platform etcd performance troubleshooting guide for OpenShift Container Platform 2.3.3. Node scaling for etcd In general, clusters must have 3 control plane nodes. However, if your cluster is installed on a bare metal platform, it can have up to 5 control plane nodes. If an existing bare-metal cluster has fewer than 5 control plane nodes, you can scale the cluster up as a postinstallation task. For example, to scale from 3 to 4 control plane nodes after installation, you can add a host and install it as a control plane node. Then, the etcd Operator scales accordingly to account for the additional control plane node. Scaling a cluster to 4 or 5 control plane nodes is available only on bare metal platforms. For more information about how to scale control plane nodes by using the Assisted Installer, see "Adding hosts with the API" and "Installing a primary control plane node on a healthy cluster". The following table shows failure tolerance for clusters of different sizes: Table 2.2. Failure tolerances by cluster size Cluster size Majority Failure tolerance 1 node 1 0 3 nodes 2 1 4 nodes 3 1 5 nodes 3 2 For more information about recovering from quorum loss, see "Restoring to a cluster state". Additional resources Adding hosts with the API Installing a primary control plane node on a healthy cluster Expanding the cluster Restoring to a cluster state 2.3.4. Moving etcd to a different disk You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues. The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.18 container storage. Note This encoded script only supports device names for the following device types: SCSI or SATA /dev/sd* Virtual device /dev/vd* NVMe /dev/nvme*[0-9]*n* Limitations When the new disk is attached to the cluster, the etcd database is part of the root mount. It is not part of the secondary disk or the intended disk when the primary node is recreated. As a result, the primary node will not create a separate /var/lib/etcd mount. Prerequisites You have a backup of your cluster's etcd data. You have installed the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. Add additional disks before uploading the machine configuration. The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role] . This applies to a controller, worker, or a custom pool. Note This procedure does not move parts of the root file system, such as /var/ , to another disk or partition on an installed node. Important This procedure is not supported when using control plane machine sets. Procedure Attach the new disk to the cluster and verify that the disk is detected in the node by running the lsblk command in a debug shell: USD oc debug node/<node_name> # lsblk Note the device name of the new disk reported by the lsblk command. Create the following script and name it etcd-find-secondary-device.sh : #!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid "USD{device}" &> /dev/null if [ USD? == 2 ]; then echo "secondary device found USD{device}" echo "creating filesystem for etcd mount" mkfs.xfs -L var-lib-etcd -f "USD{device}" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo "Couldn't find secondary block device!" >&2 exit 77 1 Replace <device_type_glob> with a shell glob for your block device type. For SCSI or SATA drives, use /dev/sd* ; for virtual drives, use /dev/vd* ; for NVMe drives, use /dev/nvme*[0-9]*n* . Create a base64-encoded string from the etcd-find-secondary-device.sh script and note its contents: USD base64 -w0 etcd-find-secondary-device.sh Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target 1 Replace <encoded_etcd_find_secondary_device_script> with the encoded script contents that you noted. Verification steps Run the grep /var/lib/etcd /proc/mounts command in a debug shell for the node to ensure that the disk is mounted: USD oc debug node/<node_name> # grep -w "/var/lib/etcd" /proc/mounts Example output /dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0 Additional resources Red Hat Enterprise Linux CoreOS (RHCOS) 2.3.5. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 2.3.5.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 2.3.5.2. Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 2.3.6. Setting tuning parameters for etcd You can set the control plane hardware speed to "Standard" , "Slower" , or the default, which is "" . The default setting allows the system to decide which speed to use. This value enables upgrades from versions where this feature does not exist, as the system can select values from versions. By selecting one of the other values, you are overriding the default. If you see many leader elections due to timeouts or missed heartbeats and your system is set to "" or "Standard" , set the hardware speed to "Slower" to make the system more tolerant to the increased latency. 2.3.6.1. Changing hardware speed tolerance To change the hardware speed tolerance for etcd, complete the following steps. Procedure Check to see what the current value is by entering the following command: USD oc describe etcd/cluster | grep "Control Plane Hardware Speed" Example output Control Plane Hardware Speed: <VALUE> Note If the output is empty, the field has not been set and should be considered as the default (""). Change the value by entering the following command. Replace <value> with one of the valid values: "" , "Standard" , or "Slower" : USD oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "<value>"}}' The following table indicates the heartbeat interval and leader election timeout for each profile. These values are subject to change. Profile ETCD_HEARTBEAT_INTERVAL ETCD_LEADER_ELECTION_TIMEOUT "" Varies depending on platform Varies depending on platform Standard 100 1000 Slower 500 2500 Review the output: Example output etcd.operator.openshift.io/cluster patched If you enter any value besides the valid values, error output is displayed. For example, if you entered "Faster" as the value, the output is as follows: Example output The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: "Faster": supported values: "", "Standard", "Slower" Verify that the value was changed by entering the following command: USD oc describe etcd/cluster | grep "Control Plane Hardware Speed" Example output Control Plane Hardware Speed: "" Wait for etcd pods to roll out: USD oc get pods -n openshift-etcd -w The following output shows the expected entries for master-0. Before you continue, wait until all masters show a status of 4/4 Running . Example output installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s Enter the following command to review to the values: USD oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT Note These values might not have changed from the default. Additional resources Understanding feature gates 2.3.7. Increasing the database size for etcd You can set the disk quota in gibibytes (GiB) for each etcd instance. If you set a disk quota for your etcd instance, you can specify integer values from 8 to 32. The default value is 8. You can specify only increasing values. You might want to increase the disk quota if you encounter a low space alert. This alert indicates that the cluster is too large to fit in etcd despite automatic compaction and defragmentation. If you see this alert, you need to increase the disk quota immediately because after etcd runs out of space, writes fail. Another scenario where you might want to increase the disk quota is if you encounter an excessive database growth alert. This alert is a warning that the database might grow too large in the four hours. In this scenario, consider increasing the disk quota so that you do not eventually encounter a low space alert and possible write fails. If you increase the disk quota, the disk space that you specify is not immediately reserved. Instead, etcd can grow to that size if needed. Ensure that etcd is running on a dedicated disk that is larger than the value that you specify for the disk quota. For large etcd databases, the control plane nodes must have additional memory and storage. Because you must account for the API server cache, the minimum memory required is at least three times the configured size of the etcd database. Important Increasing the database size for etcd is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.3.7.1. Changing the etcd database size To change the database size for etcd, complete the following steps. Procedure Check the current value of the disk quota for each etcd instance by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" Example output Backend Quota Gi B: <value> Change the value of the disk quota by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": <value>}}' Example output etcd.operator.openshift.io/cluster patched Verification Verify that the new value for the disk quota is set by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" The etcd Operator automatically rolls out the etcd instances with the new values. Verify that the etcd pods are up and running by entering the following command: USD oc get pods -n openshift-etcd The following output shows the expected entries. Example output NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m Verify that the disk quota value is updated for the etcd pod by entering the following command: USD oc describe -n openshift-etcd pod/<etcd_podname> | grep "ETCD_QUOTA_BACKEND_BYTES" The value might not have changed from the default value of 8 . Example output ETCD_QUOTA_BACKEND_BYTES: 8589934592 Note While the value that you set is an integer in GiB, the value shown in the output is converted to bytes. 2.3.7.2. Troubleshooting If you encounter issues when you try to increase the database size for etcd, the following troubleshooting steps might help. 2.3.7.2.1. Value is too small If the value that you specify is less than 8 , you see the following error message: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 5}}' Example error message The Etcd "cluster" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased To resolve this issue, specify an integer between 8 and 32 . 2.3.7.2.2. Value is too large If the value that you specify is greater than 32 , you see the following error message: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 64}}' Example error message The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32 To resolve this issue, specify an integer between 8 and 32 . 2.3.7.2.3. Value is decreasing If the value is set to a valid value between 8 and 32 , you cannot decrease the value. Otherwise, you see an error message. Check to see the current value by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" Example output Backend Quota Gi B: 10 Decrease the disk quota value by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 8}}' Example error message The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased To resolve this issue, specify an integer greater than 10 . | [
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"providerSpec: value: instanceType: <compatible_aws_instance_type> 1",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring",
"oc create -f cluster-monitoring-config.yaml",
"sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf",
"sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf",
"oc debug node/<node_name>",
"lsblk",
"#!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid \"USD{device}\" &> /dev/null if [ USD? == 2 ]; then echo \"secondary device found USD{device}\" echo \"creating filesystem for etcd mount\" mkfs.xfs -L var-lib-etcd -f \"USD{device}\" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo \"Couldn't find secondary block device!\" >&2 exit 77",
"base64 -w0 etcd-find-secondary-device.sh",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target",
"oc debug node/<node_name>",
"grep -w \"/var/lib/etcd\" /proc/mounts",
"/dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0",
"etcd member has been defragmented: <member_name> , memberID: <member_id>",
"failed defrag on member: <member_name> , memberID: <member_id> : <error_message>",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"",
"Control Plane Hardware Speed: <VALUE>",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"controlPlaneHardwareSpeed\": \"<value>\"}}'",
"etcd.operator.openshift.io/cluster patched",
"The Etcd \"cluster\" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: \"Faster\": supported values: \"\", \"Standard\", \"Slower\"",
"oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"",
"Control Plane Hardware Speed: \"\"",
"oc get pods -n openshift-etcd -w",
"installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s",
"oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT",
"oc describe etcd/cluster | grep \"Backend Quota\"",
"Backend Quota Gi B: <value>",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": <value>}}'",
"etcd.operator.openshift.io/cluster patched",
"oc describe etcd/cluster | grep \"Backend Quota\"",
"oc get pods -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m",
"oc describe -n openshift-etcd pod/<etcd_podname> | grep \"ETCD_QUOTA_BACKEND_BYTES\"",
"ETCD_QUOTA_BACKEND_BYTES: 8589934592",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 5}}'",
"The Etcd \"cluster\" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 64}}'",
"The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32",
"oc describe etcd/cluster | grep \"Backend Quota\"",
"Backend Quota Gi B: 10",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 8}}'",
"The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/recommended-performance-and-scalability-practices-2 |
Chapter 10. Using ldapmodify to manage IdM users externally | Chapter 10. Using ldapmodify to manage IdM users externally As an IdM administrators you can use the ipa commands to manage your directory content. Alternatively, you can use the ldapmodify command to achieve similar goals. You can use this command interactively and provide all the data directly in the command line. You also can provide data in the file in the LDAP Data Interchange Format (LDIF) to ldapmodify command. 10.1. Templates for managing IdM user accounts externally The following templates can be used for various user management operations in IdM. The templates show which attributes you must modify using ldapmodify to achieve the following goals: Adding a new stage user Modifying a user's attribute Enabling a user Disabling a user Preserving a user The templates are formatted in the LDAP Data Interchange Format (LDIF). LDIF is a standard plain text data interchange format for representing LDAP directory content and update requests. Using the templates, you can configure the LDAP provider of your provisioning system to manage IdM user accounts. For detailed example procedures, see the following sections: Adding an IdM stage user defined in an LDIF file Adding an IdM stage user directly from the CLI using ldapmodify Preserving an IdM user with ldapmodify Templates for adding a new stage user A template for adding a user with UID and GID assigned automatically . The distinguished name (DN) of the created entry must start with uid=user_login : A template for adding a user with UID and GID assigned statically : You are not required to specify any IdM object classes when adding stage users. IdM adds these classes automatically after the users are activated. Templates for modifying existing users Modifying a user's attribute : Disabling a user : Enabling a user : Updating the nssAccountLock attribute has no effect on stage and preserved users. Even though the update operation completes successfully, the attribute value remains nssAccountLock: TRUE . Preserving a user : Note Before modifying a user, obtain the user's distinguished name (DN) by searching using the user's login. In the following example, the user_allowed_to_modify_user_entries user is a user allowed to modify user and group information, for example activator or IdM administrator. The password in the example is this user's password: 10.2. Templates for managing IdM group accounts externally The following templates can be used for various user group management operations in IdM. The templates show which attributes you must modify using ldapmodify to achieve the following aims: Creating a new group Deleting an existing group Adding a member to a group Removing a member from a group The templates are formatted in the LDAP Data Interchange Format (LDIF). LDIF is a standard plain text data interchange format for representing LDAP directory content and update requests. Using the templates, you can configure the LDAP provider of your provisioning system to manage IdM group accounts. Creating a new group Modifying groups Deleting an existing group : Adding a member to a group : Do not add stage or preserved users to groups. Even though the update operation completes successfully, the users will not be updated as members of the group. Only active users can belong to groups. Removing a member from a group : Note Before modifying a group, obtain the group's distinguished name (DN) by searching using the group's name. 10.3. Using ldapmodify command interactively You can modify Lightweight Directory Access Protocol (LDAP) entries in the interactive mode. Procedure In a command line, enter the LDAP Data Interchange Format (LDIF) statement after the ldapmodify command. Example 10.1. Changing the telephone number for a testuser Note that you need to obtain a Kerberos ticket for using -Y option. Press Ctlr+D to exit the interactive mode. Alternatively, provide an LDIF file after ldapmodify command: Example 10.2. The ldapmodify command reads modification data from an LDIF file Additional resources For more information about how to use the ldapmodify command see ldapmodify(1) man page on your system. For more information about the LDIF structure, see ldif(5) man page on your system. 10.4. Preserving an IdM user with ldapmodify Follow this procedure to use ldapmodify to preserve an IdM user; that is, how to deactivate a user account after the employee has left the company. Prerequisites You can authenticate as an IdM user with a role to preserve users. Procedure Log in as an IdM user with a role to preserve users: Enter the ldapmodify command and specify the Generic Security Services API (GSSAPI) as the Simple Authentication and Security Layer (SASL) mechanism to be used for authentication: Enter the dn of the user you want to preserve: Enter modrdn as the type of change you want to perform: Specify the newrdn for the user: Indicate that you want to preserve the user: Specify the new superior DN : Preserving a user moves the entry to a new location in the directory information tree (DIT). For this reason, you must specify the DN of the new parent entry as the new superior DN. Press Enter again to confirm that this is the end of the entry: Exit the connection using Ctrl + C . Verification Verify that the user has been preserved by listing all preserved users: | [
"dn: uid=user_login ,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name",
"dn: uid=user_login,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/user_login",
"dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE",
"dn: distinguished_name changetype: modrdn newrdn: uid=user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"ldapsearch -LLL -x -D \"uid= user_allowed_to_modify_user_entries ,cn=users,cn=accounts,dc=idm,dc=example,dc=com\" -w \"Secret123\" -H ldap://r8server.idm.example.com -b \"cn=users,cn=accounts,dc=idm,dc=example,dc=com\" uid=test_user dn: uid=test_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com memberOf: cn=ipausers,cn=groups,cn=accounts,dc=idm,dc=example,dc=com",
"dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup uid: group_name cn: group_name gidNumber: GID_number",
"dn: group_distinguished_name changetype: delete",
"dn: group_distinguished_name changetype: modify add: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"dn: distinguished_name changetype: modify delete: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"ldapsearch -YGSSAPI -H ldap://server.idm.example.com -b \"cn=groups,cn=accounts,dc=idm,dc=example,dc=com\" \"cn=group_name\" dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com ipaNTSecurityIdentifier: S-1-5-21-1650388524-2605035987-2578146103-11017 cn: testgroup objectClass: top objectClass: groupofnames objectClass: nestedgroup objectClass: ipausergroup objectClass: ipaobject objectClass: posixgroup objectClass: ipantgroupattrs ipaUniqueID: 569bf864-9d45-11ea-bea3-525400f6f085 gidNumber: 1997010017",
"ldapmodify -Y GSSAPI -H ldap://server.example.com dn: uid=testuser,cn=users,cn=accounts,dc=example,dc=com changetype: modify replace: telephoneNumber telephonenumber: 88888888",
"ldapmodify -Y GSSAPI -H ldap://server.example.com -f ~/example.ldif",
"kinit admin",
"ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed.",
"dn: uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"changetype: modrdn",
"newrdn: uid=user1",
"deleteoldrdn: 0",
"newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"[Enter] modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com\"",
"ipa user-find --preserved=true -------------- 1 user matched -------------- User login: user1 First name: First 1 Last name: Last 1 Home directory: /home/user1 Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1997010003 GID: 1997010003 Account disabled: True Preserved user: True ---------------------------- Number of entries returned 1 ----------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ldapmodify-to-manage-IdM-users-externally_managing-users-groups-hosts |
D.5. Online Certificate Status Manager-Specific ACLs | D.5. Online Certificate Status Manager-Specific ACLs This section covers the default access control configuration attributes which are set specifically for the Online Certificate Status Manager. The OCSP responder's ACL configuration also includes all of the common ACLs listed in Section D.2, "Common ACLs" . There are access control rules set for each of the OCSP's interfaces (administrative console and agents and end-entities services pages) and for common operations like listing and downloading CRLs. D.5.1. certServer.ee.crl Controls access to CRLs through the end-entities page. Table D.57. certServer.ee.crl ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read Retrieve and view the certificate revocation list. Allow Anyone D.5.2. certServer.ee.request.ocsp Controls access, based on IP address, on which clients submit OCSP requests. Table D.58. certServer.ee.request.ocsp ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups submit Submit OCSP requests. Allow All IP addresses D.5.3. certServer.ocsp.ca Controls who can instruct the OCSP responder. The default setting is: Table D.59. certServer.ocsp.ca ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups Add Instruct the OCSP responder to respond to OCSP requests for a new CA. Allow OCSP Manager Agents D.5.4. certServer.ocsp.cas Controls who can list, in the agent services interface, all of the Certificate Managers which publish CRLs to the Online Certificate Status Manager. The default setting is: Table D.60. certServer.ocsp.cas ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups list Lists all of the Certificate Managers which publish CRLs to the OCSP responder. Allow Agents D.5.5. certServer.ocsp.certificate Controls who can validate the status of a certificate. The default setting is: Table D.61. certServer.ocsp.certificate ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups validate Verifies the status of a specified certificate. Allow OCSP Agents D.5.6. certServer.ocsp.configuration Controls who can access, view, or modify the configuration for the Certificate Manager's OCSP services. The default configuration is: Table D.62. certServer.ocsp.configuration ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View OCSP plug-in information, OCSP configuration, and OCSP stores configuration. List OCSP stores configuration. Allow Administrators Online Certificate Status Manager Agents Auditors modify Modify the OCSP configuration, OCSP stores configuration, and default OCSP store. Allow Administrators D.5.7. certServer.ocsp.crl Controls access to read or update CRLs through the agent services interface. The default setting is: Table D.63. certServer.ocsp.crl ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups add Add new CRLs to those managed by the OCSP responder. Allow OCSP Agents Trusted Managers D.5.8. certServer.ocsp.group Controls access to the internal database for adding users and groups for the Online Certificate Status Manager instance. Table D.64. certServer.ocsp.group ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups modify Create, edit or delete user and group entries for the instance. Allow Administrators read View user and group entries for the instance. Allow Administrators D.5.9. certServer.ocsp.info Controls who can read information about the OCSP responder. Table D.65. certServer.ocsp.info ACL Summary Operations Description Allow/Deny Access Targeted Users/Groups read View OCSP responder information. Allow OCSP Agents | [
"allow (read) user=\"anybody\"",
"allow (submit) ipaddress=\".*\"",
"allow (add) group=\"Online Certificate Status Manager Agents\"",
"allow (list) group=\"Online Certificate Status Manager Agents\"",
"allow (validate) group=\"Online Certificate Status Manager Agents\"",
"allow (read) group=\"Administrators\" || group=\"Online Certificate Status Manager Agents\" || group=\"Auditors\";allow (modify) group=\"Administrators\"",
"allow (add) group=\"Online Certificate Status Manager Agents\" || group=\"Trusted Managers\"",
"allow (modify,read) group=\"Administrators\"",
"allow (read) group=\"Online Certificate Status Manager Agents\""
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/ocsp-acl-reference |
4.3. SQL Translation Extension | 4.3. SQL Translation Extension The JDBCExcecutionFactory provides several methods to modify the command and the string form of the resulting syntax before it is sent to the JDBC driver, including: Change basic SQL syntax options. See the useXXX methods, e.g. useSelectLimit returns true for SQLServer to indicate that limits are applied in the SELECT clause. Register one or more FunctionModifiers that define how a scalar function is to be modified or transformed. Modify a LanguageObject (see the translate , translateXXX , and FunctionModifier.translate methods). Modify the passed in object and return null to indicate that the standard syntax output will be used. Change the way SQL strings are formed for a LanguageObject (see the translate , translateXXX , and FunctionModifier.translate methods). This returns a list of parts which can contain strings and LanguageObjects. These are appended to the SQL string in order. If the incoming LanguageObject appears in the returned list it is not translated again. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/sql_translation_extension |
Chapter 9. Hardware Drivers and Devices | Chapter 9. Hardware Drivers and Devices 9.1. Virtualized Hardware Red Hat Virtualization presents three distinct types of system devices to virtualized guests. These hardware devices all appear as physically attached hardware devices to the virtualized guest but the device drivers work in different ways. Emulated devices Emulated devices, sometimes referred to as virtual devices, exist entirely in software. Emulated device drivers are a translation layer between the operating system running on the host (which manages the source device) and the operating systems running on the guests. The device level instructions directed to and from the emulated device are intercepted and translated by the hypervisor. Any device of the same type as that being emulated and recognized by the Linux kernel is able to be used as the backing source device for the emulated drivers. Para-virtualized Devices Para-virtualized devices require the installation of device drivers on the guest operating system providing it with an interface to communicate with the hypervisor on the host machine. This interface is used to allow traditionally intensive tasks such as disk I/O to be performed outside of the virtualized environment. Lowering the overhead inherent in virtualization in this manner is intended to allow guest operating system performance closer to that expected when running directly on physical hardware. Physically shared devices Certain hardware platforms allow virtualized guests to directly access various hardware devices and components. This process in virtualization is known as passthrough or device assignment. Passthrough allows devices to appear and behave as if they were physically attached to the guest operating system. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/chap-hardware_drivers_and_devices |
Chapter 12. Managing control plane machines | Chapter 12. Managing control plane machines 12.1. About control plane machine sets With control plane machine sets, you can automate management of the control plane machine resources within your OpenShift Container Platform cluster. Important Control plane machine sets cannot manage compute machines, and compute machine sets cannot manage control plane machines. Control plane machine sets provide for control plane machines similar management capabilities as compute machine sets provide for compute machines. However, these two types of machine sets are separate custom resources defined within the Machine API and have several fundamental differences in their architecture and functionality. 12.1.1. Control Plane Machine Set Operator overview The Control Plane Machine Set Operator uses the ControlPlaneMachineSet custom resource (CR) to automate management of the control plane machine resources within your OpenShift Container Platform cluster. When the state of the cluster control plane machine set is set to Active , the Operator ensures that the cluster has the correct number of control plane machines with the specified configuration. This allows the automated replacement of degraded control plane machines and rollout of changes to the control plane. A cluster has only one control plane machine set, and the Operator only manages objects in the openshift-machine-api namespace. 12.1.1.1. Control Plane Machine Set Operator limitations The Control Plane Machine Set Operator has the following limitations: Only Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Power(R) Virtual Server, Microsoft Azure, Nutanix, VMware vSphere, and Red Hat OpenStack Platform (RHOSP) clusters are supported. Clusters that do not have preexisting machines that represent the control plane nodes cannot use a control plane machine set or enable the use of a control plane machine set after installation. Generally, preexisting control plane machines are only present if a cluster was installed using infrastructure provisioned by the installation program. To determine if a cluster has the required preexisting control plane machines, run the following command as a user with administrator privileges: USD oc get machine \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machine-role=master Example output showing preexisting control plane machines NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m Example output missing preexisting control plane machines No resources found in openshift-machine-api namespace. The Operator requires the Machine API Operator to be operational and is therefore not supported on clusters with manually provisioned machines. When installing a OpenShift Container Platform cluster with manually provisioned machines for a platform that creates an active generated ControlPlaneMachineSet custom resource (CR), you must remove the Kubernetes manifest files that define the control plane machine set as instructed in the installation process. Only clusters with three control plane machines are supported. Horizontal scaling of the control plane is not supported. Deploying Azure control plane machines on Ephemeral OS disks increases risk for data loss and is not supported. Deploying control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs is not supported. Important Attempting to deploy control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs might cause the cluster to lose etcd quorum. A cluster that loses all control plane machines simultaneously is unrecoverable. Making changes to the control plane machine set during or prior to installation is not supported. You must make any changes to the control plane machine set only after installation. 12.1.2. Additional resources Control Plane Machine Set Operator reference ControlPlaneMachineSet custom resource 12.2. Getting started with control plane machine sets The process for getting started with control plane machine sets depends on the state of the ControlPlaneMachineSet custom resource (CR) in your cluster. Clusters with an active generated CR Clusters that have a generated CR with an active state use the control plane machine set by default. No administrator action is required. Clusters with an inactive generated CR For clusters that include an inactive generated CR, you must review the CR configuration and activate the CR . Clusters without a generated CR For clusters that do not include a generated CR, you must create and activate a CR with the appropriate configuration for your cluster. If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status . 12.2.1. Supported cloud providers In OpenShift Container Platform 4.15, the control plane machine set is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, and VMware vSphere clusters. The status of the control plane machine set after installation depends on your cloud provider and the version of OpenShift Container Platform that you installed on your cluster. Table 12.1. Control plane machine set implementation for OpenShift Container Platform 4.15 Cloud provider Active by default Generated CR Manual CR required Amazon Web Services (AWS) X [1] X Google Cloud Platform (GCP) X [2] X Microsoft Azure X [2] X Nutanix X [3] X VMware vSphere X [4] X [4] X Red Hat OpenStack Platform (RHOSP) X [3] X AWS clusters that are upgraded from version 4.11 or earlier require CR activation . GCP and Azure clusters that are upgraded from version 4.12 or earlier require CR activation . Nutanix and RHOSP clusters that are upgraded from version 4.13 or earlier require CR activation . In OpenShift Container Platform 4.15, installing a cluster with an active generated CR on VWware vSphere is available as a Technology Preview feature. To enable the feature, set the featureSet parameter to TechPreviewNoUpgrade in the install-config.yaml file . 12.2.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. 12.2.3. Activating the control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster with a generated CR, you must verify that the configuration in the CR is correct for your cluster and activate it. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Procedure View the configuration of the CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Change the values of any fields that are incorrect for your cluster configuration. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Important To activate the CR, you must change the .spec.state field to Active in the same oc edit session that you use to update the CR configuration. If the CR is saved with the state left as Inactive , the control plane machine set generator resets the CR to its original settings. Additional resources Control plane machine set configuration 12.2.4. Creating a control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster without a generated CR, you must create the CR manually and activate it. Note For more information about the structure and parameters of the CR, see "Control plane machine set configuration". Procedure Create a YAML file using the following template: Control plane machine set CR YAML file template apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the CR, you must ensure that its configuration is correct for your cluster requirements. 3 Specify the update strategy for the cluster. Valid values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 4 Specify your cloud provider platform name. Valid values are AWS , Azure , GCP , Nutanix , VSphere , and OpenStack . 5 Add the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. 6 Specify the infrastructure ID. 7 Add the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Refer to the sample YAML for a control plane machine set CR and populate your file with values that are appropriate for your cluster configuration. Refer to the sample failure domain configuration and sample provider specification for your cloud provider and update those sections of your file with the appropriate values. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Create the CR from your YAML file by running the following command: USD oc create -f <control_plane_machine_set>.yaml where <control_plane_machine_set> is the name of the YAML file that contains the CR configuration. Additional resources Updating the control plane configuration Control plane machine set configuration Provider-specific configuration options 12.3. Managing control plane machines with control plane machine sets Control plane machine sets automate several essential aspects of control plane management. 12.3.1. Updating the control plane configuration You can make changes to the configuration of the machines in the control plane by updating the specification in the control plane machine set custom resource (CR). The Control Plane Machine Set Operator monitors the control plane machines and compares their configuration with the specification in the control plane machine set CR. When there is a discrepancy between the specification in the CR and the configuration of a control plane machine, the Operator marks that control plane machine for replacement. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Prerequisites Your cluster has an activated and functioning Control Plane Machine Set Operator. Procedure Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Change the values of any fields that you want to update in your cluster configuration. Save your changes. steps For clusters that use the default RollingUpdate update strategy, the control plane machine set propagates changes to your control plane configuration automatically. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. 12.3.1.1. Automatic updates to the control plane configuration The RollingUpdate update strategy automatically propagates changes to your control plane configuration. This update strategy is the default configuration for the control plane machine set. For clusters that use the RollingUpdate update strategy, the Operator creates a replacement control plane machine with the configuration that is specified in the CR. When the replacement control plane machine is ready, the Operator deletes the control plane machine that is marked for replacement. The replacement machine then joins the control plane. If multiple control plane machines are marked for replacement, the Operator protects etcd health during replacement by repeating this replacement process one machine at a time until it has replaced each machine. 12.3.1.2. Manual updates to the control plane configuration You can use the OnDelete update strategy to propagate changes to your control plane configuration by replacing machines manually. Manually replacing machines allows you to test changes to your configuration on a single machine before applying the changes more broadly. For clusters that are configured to use the OnDelete update strategy, the Operator creates a replacement control plane machine when you delete an existing machine. When the replacement control plane machine is ready, the etcd Operator allows the existing machine to be deleted. The replacement machine then joins the control plane. If multiple control plane machines are deleted, the Operator creates all of the required replacement machines simultaneously. The Operator maintains etcd health by preventing more than one machine being removed from the control plane at once. 12.3.2. Replacing a control plane machine To replace a control plane machine in a cluster that has a control plane machine set, you delete the machine manually. The control plane machine set replaces the deleted machine with one using the specification in the control plane machine set custom resource (CR). Prerequisites If your cluster runs on Red Hat OpenStack Platform (RHOSP) and you need to evacuate a compute server, such as for an upgrade, you must disable the RHOSP compute node that the machine runs on by running the following command: USD openstack compute service set <target_node_host_name> nova-compute --disable For more information, see Preparing to migrate in the RHOSP documentation. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api Delete a control plane machine by running the following command: USD oc delete machine \ -n openshift-machine-api \ <control_plane_machine_name> 1 1 Specify the name of the control plane machine to delete. Note If you delete multiple control plane machines, the control plane machine set replaces them according to the configured update strategy: For clusters that use the default RollingUpdate update strategy, the Operator replaces one machine at a time until each machine is replaced. For clusters that are configured to use the OnDelete update strategy, the Operator creates all of the required replacement machines simultaneously. Both strategies maintain etcd health during control plane machine replacement. 12.3.3. Additional resources Control plane machine set configuration Provider-specific configuration options 12.4. Control plane machine set configuration This example YAML snippet shows the base structure for a control plane machine set custom resource (CR). 12.4.1. Sample YAML for a control plane machine set custom resource The base of the ControlPlaneMachineSet CR is structured the same way for all platforms. Sample ControlPlaneMachineSet CR YAML file apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8 1 Specifies the name of the ControlPlaneMachineSet CR, which is cluster . Do not change this value. 2 Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3 . Horizontal scaling is not supported. Do not change this value. 3 Specifies the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 4 Specifies the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. For more information about activating the Control Plane Machine Set Operator, see "Getting started with control plane machine sets". 5 Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 6 Specifies the cloud provider platform name. Do not change this value. 7 Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. 8 Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Additional resources Getting started with control plane machine sets Updating the control plane configuration 12.4.2. Provider-specific configuration options The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set manifests are provider specific. For provider-specific configuration options for your cluster, see the following resources: Control plane configuration options for Amazon Web Services Control plane configuration options for Google Cloud Platform Control plane configuration options for Microsoft Azure Control plane configuration options for Nutanix Control plane configuration options for Red Hat OpenStack Platform (RHOSP) Control plane configuration options for VMware vSphere 12.5. Configuration options for control plane machines 12.5.1. Control plane configuration options for Amazon Web Services You can change the configuration of your Amazon Web Services (AWS) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.1.1. Sample YAML for configuring Amazon Web Services clusters The following example YAML snippets show provider specification and failure domain configurations for an AWS cluster. 12.5.1.1.1. Sample AWS provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample AWS providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: "" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: "" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14 1 Specifies the Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. 2 Specifies the configuration of an encrypted EBS volume. 3 Specifies the secret name for the cluster. Do not change this value. 4 Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value. 5 Specifies the AWS instance type for the control plane. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the internal ( int ) and external ( ext ) load balancers for the cluster. Note You can omit the external ( ext ) load balancer parameters on private OpenShift Container Platform clusters. 8 Specifies where to create the control plane instance in AWS. 9 Specifies the AWS region for the cluster. 10 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain. 11 Specifies the AWS Dedicated Instance configuration for the control plane. For more information, see AWS documentation about Dedicated Instances . The following values are valid: default : The Dedicated Instance runs on shared hardware. dedicated : The Dedicated Instance runs on single-tenant hardware. host : The Dedicated Instance runs on a Dedicated Host, which is an isolated server with configurations that you can control. 12 Specifies the control plane machines security group. 13 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain. Note If the failure domain configuration does not specify a value, the value in the provider specification is used. Configuring a subnet in the failure domain overwrites the subnet value in the provider specification. 14 Specifies the control plane user data secret. Do not change this value. 12.5.1.1.2. Sample AWS failure domain configuration The control plane machine set concept of a failure domain is analogous to existing AWS concept of an Availability Zone (AZ) . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use. Sample AWS failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7 # ... 1 Specifies an AWS availability zone for the first failure domain. 2 Specifies a subnet configuration. In this example, the subnet type is Filters , so there is a filters stanza. 3 Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone. 4 Specifies the subnet type. The allowed values are: ARN , Filters and ID . The default value is Filters . 5 Specifies the subnet name for an additional failure domain, using the infrastructure ID and the AWS availability zone. 6 Specifies the cluster's infrastructure ID and the AWS availability zone for the additional failure domain. 7 Specifies the cloud provider platform name. Do not change this value. 12.5.1.2. Enabling Amazon Web Services features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.1.2.1. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. Additional resources Configuring the Ingress Controller endpoint publishing scope to Internal 12.5.1.2.2. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. 12.5.1.2.3. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. The control plane machine set spreads the control plane machines across multiple failure domains when possible. To use placement groups for the control plane, you must use a placement group type that can span multiple Availability Zones. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The interface type field indicates that it uses an EFA. 12.5.1.2.4. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 12.5.1.2.4.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 12.5.1.2.5. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 12.5.1.2.5.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 12.5.2. Control plane configuration options for Microsoft Azure You can change the configuration of your Microsoft Azure control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.2.1. Sample YAML for configuring Microsoft Azure clusters The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster. 12.5.2.1.1. Sample Azure provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Azure providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: "" publisher: "" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: "" version: "" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: "1" 11 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the image details for your control plane machine set. 3 Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 4 Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs. 5 Specifies the cloud provider platform type. Do not change this value. 6 Specifies the region to place control plane machines on. 7 Specifies the disk configuration for the control plane. 8 Specifies the public load balancer for the control plane. Note You can omit the publicLoadBalancer parameter on private OpenShift Container Platform clusters that have user-defined outbound routing. 9 Specifies the subnet for the control plane. 10 Specifies the control plane user data secret. Do not change this value. 11 Specifies the zone configuration for clusters that use a single zone for all failure domains. Note If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it. 12.5.2.1.2. Sample Azure failure domain configuration The control plane machine set concept of a failure domain is analogous to existing Azure concept of an Azure availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. An Azure cluster uses a single subnet that spans multiple zones. Sample Azure failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: "1" 1 - zone: "2" - zone: "3" platform: Azure 2 # ... 1 Each instance of zone specifies an Azure availability zone for a failure domain. Note If the cluster is configured to use a single zone for all failure domains, the zone parameter is configured in the provider specification instead of in the failure domain configuration. 2 Specifies the cloud provider platform name. Do not change this value. 12.5.2.2. Enabling Microsoft Azure features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.2.2.1. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. Additional resources Configuring the Ingress Controller endpoint publishing scope to Internal 12.5.2.2.2. Using the Azure Marketplace offering You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 12.5.2.2.3. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 12.5.2.2.4. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Additional resources Microsoft Azure ultra disks documentation 12.5.2.2.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the master data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with master . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with master . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with master . Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with master . Save your changes. For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk on the control plane, reconfigure your workload to use the control plane's ultra disk mount point. 12.5.2.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 12.5.2.2.4.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, verify that your machine set configuration is correct. 12.5.2.2.4.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 12.5.2.2.4.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired. 12.5.2.2.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 12.5.2.2.6. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.15 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 12.2. UEFI feature combination compatibility Secure Boot [1] vTPM [2] Valid configuration Enabled Enabled Yes Enabled Disabled Yes Enabled Omitted Yes Disabled Enabled Yes Omitted Enabled Yes Disabled Disabled No Omitted Disabled No Omitted Omitted No Using the secureBoot field. Using the virtualizedTrustedPlatformModule field. For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 12.5.2.2.7. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.15 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Warning Not all instance types support confidential VMs. Do not change the instance type for a control plane machine set that is configured to use confidential VMs to a type that is incompatible. Using an incompatible instance type can cause your cluster to become unstable. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. 3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 12.5.2.2.8. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation. 12.5.2.2.8.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . 12.5.2.2.9. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.15.25 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machine \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machine-role=master In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 12.5.2.2.9.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . 12.5.3. Control plane configuration options for Google Cloud Platform You can change the configuration of your Google Cloud Platform (GCP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.3.1. Sample YAML for configuring Google Cloud Platform clusters The following example YAML snippets show provider specification and failure domain configurations for a GCP cluster. 12.5.3.1.1. Sample GCP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \ get ControlPlaneMachineSet/cluster Sample GCP providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: "" 8 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the path to the image that was used to create the disk. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the name of the GCP project that you use for your cluster. 5 Specifies the GCP region for the cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 7 Specifies the control plane user data secret. Do not change this value. 8 This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 12.5.3.1.2. Sample GCP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use. Sample GCP failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3 # ... 1 Specifies a GCP zone for the first failure domain. 2 Specifies an additional failure domain. Further failure domains are added the same way. 3 Specifies the cloud provider platform name. Do not change this value. 12.5.3.2. Enabling Google Cloud Platform features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.3.2.1. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: type: pd-ssd 1 1 Control plane nodes must use the pd-ssd disk type. Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 12.5.3.2.2. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.15 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 12.5.3.2.3. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 12.5.3.2.4. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 12.5.4. Control plane configuration options for Nutanix You can change the configuration of your Nutanix control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.4.1. Sample YAML for configuring Nutanix clusters The following example YAML snippet shows a provider specification configuration for a Nutanix cluster. 12.5.4.1.1. Sample Nutanix provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Nutanix providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13 1 Specifies the boot type that the control plane machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.15. 2 Specifies one or more Nutanix Prism categories to apply to control plane machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 3 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. Note Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations. If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 4 Specifies the secret name for the cluster. Do not change this value. 5 Specifies the image that was used to create the disk. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the memory allocated for the control plane machines. 8 Specifies the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 9 Specifies a subnet configuration. In this example, the subnet type is uuid , so there is a uuid stanza. Note Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations. If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 10 Specifies the VM disk size for the control plane machines. 11 Specifies the control plane user data secret. Do not change this value. 12 Specifies the number of vCPU sockets allocated for the control plane machines. 13 Specifies the number of vCPUs for each control plane vCPU socket. 12.5.4.1.2. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 12.5.5. Control plane configuration options for Red Hat OpenStack Platform You can change the configuration of your Red Hat OpenStack Platform (RHOSP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.5.1. Sample YAML for configuring Red Hat OpenStack Platform (RHOSP) clusters The following example YAML snippets show provider specification and failure domain configurations for an RHOSP cluster. 12.5.5.1.1. Sample RHOSP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Sample OpenStack providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data 1 The secret name for the cluster. Do not change this value. 2 The RHOSP flavor type for the control plane. 3 The RHOSP cloud provider platform type. Do not change this value. 4 The control plane machines security group. 12.5.5.1.2. Sample RHOSP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing Red Hat OpenStack Platform (RHOSP) concept of an availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. The following example demonstrates the use of multiple Nova availability zones as well as Cinder availability zones. Sample OpenStack failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2 # ... 12.5.5.2. Enabling Red Hat OpenStack Platform (RHOSP) features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.5.2.1. Changing the RHOSP compute flavor by using a control plane machine set You can change the Red Hat OpenStack Platform (RHOSP) compute service (Nova) flavor that your control plane machines use by updating the specification in the control plane machine set custom resource. In RHOSP, flavors define the compute, memory, and storage capacity of computing instances. By increasing or decreasing the flavor size, you can scale your control plane vertically. Prerequisites Your RHOSP cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: # ... flavor: m1.xlarge 1 1 Specify a RHOSP flavor type that has the same base as the existing selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . You can choose larger or smaller flavors depending on your vertical scaling needs. Save your changes. After you save your changes, machines are replaced with ones that use the flavor you chose. 12.5.6. Control plane configuration options for VMware vSphere You can change the configuration of your VMware vSphere control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.6.1. Sample YAML for configuring VMware vSphere clusters The following example YAML snippets show provider specification and failure domain configurations for a vSphere cluster. 12.5.6.1.1. Sample VMware vSphere provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Sample vSphere providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: "" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_datacenter_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the VM disk size for the control plane machines. 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the memory allocated for the control plane machines. 5 Specifies the network on which the control plane is deployed. Note If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 6 Specifies the number of CPUs allocated for the control plane machines. 7 Specifies the number of cores for each control plane CPU. 8 Specifies the vSphere VM template to use, such as user-5ddjd-rhcos . Note If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 9 Specifies the control plane user data secret. Do not change this value. 10 Specifies the workspace details for the control plane. Note If the cluster is configured to use a failure domain, these parameters are configured in the failure domain. If you specify these values in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores them. 11 Specifies the vCenter Datacenter for the control plane. 12 Specifies the vCenter Datastore for the control plane. 13 Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 14 Specifies the vSphere resource pool for your VMs. 15 Specifies the vCenter server IP or fully qualified domain name. 12.5.6.1.2. Sample VMware vSphere failure domain configuration On VMware vSphere infrastructure, the cluster-wide infrastructure Custom Resource Definition (CRD), infrastructures.config.openshift.io , defines failure domains for your cluster. The providerSpec in the ControlPlaneMachineSet custom resource (CR) specifies names for failure domains. A failure domain is an infrastructure resource that comprises a control plane machine set, a vCenter datacenter, vCenter datastore, and a network. By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on hardware that is separate from the primary VMware vSphere infrastructure. A control plane machine set also balances control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure. Note If you modify the ProviderSpec configuration in the ControlPlaneMachineSet CR, the control plane machine set updates all control plane machines deployed on the primary infrastructure and each failure domain infrastructure. Important Defining a failure domain for a control plane machine set is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Sample vSphere failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name1> - name: <failure_domain_name2> # ... 1 Specifies the vCenter location for OpenShift Container Platform cluster nodes. 2 Specifies failure domains by name for the control plane machine set. Important Each name field value in this section must match the corresponding value in the failureDomains.name field of the cluster-wide infrastructure CRD. You can find the value of the failureDomains.name field by running the following command: USD oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name} The name field is the only supported failure domain field that you can specify in the ControlPlaneMachineSet CR. For an example of a cluster-wide infrastructure CRD that defines resources for each failure domain, see "Specifying multiple regions and zones for your cluster on vSphere." Additional resources Specifying multiple regions and zones for your cluster on vSphere 12.6. Control plane resiliency and recovery You can use the control plane machine set to improve the resiliency of the control plane for your OpenShift Container Platform cluster. 12.6.1. High availability and fault tolerance with failure domains When possible, the control plane machine set spreads the control plane machines across multiple failure domains. This configuration provides high availability and fault tolerance within the control plane. This strategy can help protect the control plane when issues arise within the infrastructure provider. 12.6.1.1. Failure domain platform support and configuration The control plane machine set concept of a failure domain is analogous to existing concepts on cloud providers. Not all platforms support the use of failure domains. Table 12.3. Failure domain support matrix Cloud provider Support for failure domains Provider nomenclature Amazon Web Services (AWS) X Availability Zone (AZ) Google Cloud Platform (GCP) X zone Microsoft Azure X Azure availability zone Nutanix X failure domain Red Hat OpenStack Platform (RHOSP) X OpenStack Nova availability zones and OpenStack Cinder availability zones VMware vSphere X failure domain mapped to a vSphere Zone [1] For more information, see "Regions and zones for a VMware vCenter". The failure domain configuration in the control plane machine set custom resource (CR) is platform-specific. For more information about failure domain parameters in the CR, see the sample failure domain configuration for your provider. Additional resources Sample Amazon Web Services failure domain configuration Sample Google Cloud Platform failure domain configuration Sample Microsoft Azure failure domain configuration Adding failure domains to an existing Nutanix cluster Sample Red Hat OpenStack Platform (RHOSP) failure domain configuration Sample VMware vSphere failure domain configuration Regions and zones for a VMware vCenter 12.6.1.2. Balancing control plane machines The control plane machine set balances control plane machines across the failure domains that are specified in the custom resource (CR). When possible, the control plane machine set uses each failure domain equally to ensure appropriate fault tolerance. If there are fewer failure domains than control plane machines, failure domains are selected for reuse alphabetically by name. For clusters with no failure domains specified, all control plane machines are placed within a single failure domain. Some changes to the failure domain configuration cause the control plane machine set to rebalance the control plane machines. For example, if you add failure domains to a cluster with fewer failure domains than control plane machines, the control plane machine set rebalances the machines across all available failure domains. 12.6.2. Recovery of failed control plane machines The Control Plane Machine Set Operator automates the recovery of control plane machines. When a control plane machine is deleted, the Operator creates a replacement with the configuration that is specified in the ControlPlaneMachineSet custom resource (CR). For clusters that use control plane machine sets, you can configure a machine health check. The machine health check deletes unhealthy control plane machines so that they are replaced. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. Additional resources Deploying machine health checks 12.6.3. Quorum protection with machine lifecycle hooks For OpenShift Container Platform clusters that use the Machine API Operator, the etcd Operator uses lifecycle hooks for the machine deletion phase to implement a quorum protection mechanism. By using a preDrain lifecycle hook, the etcd Operator can control when the pods on a control plane machine are drained and removed. To protect etcd quorum, the etcd Operator prevents the removal of an etcd member until it migrates that member onto a new node within the cluster. This mechanism allows the etcd Operator precise control over the members of the etcd quorum and allows the Machine API Operator to safely create and remove control plane machines without specific operational knowledge of the etcd cluster. 12.6.3.1. Control plane deletion with quorum protection processing order When a control plane machine is replaced on a cluster that uses a control plane machine set, the cluster temporarily has four control plane machines. When the fourth control plane node joins the cluster, the etcd Operator starts a new etcd member on the replacement node. When the etcd Operator observes that the old control plane machine is marked for deletion, it stops the etcd member on the old node and promotes the replacement etcd member to join the quorum of the cluster. The control plane machine Deleting phase proceeds in the following order: A control plane machine is slated for deletion. The control plane machine enters the Deleting phase. To satisfy the preDrain lifecycle hook, the etcd Operator takes the following actions: The etcd Operator waits until a fourth control plane machine is added to the cluster as an etcd member. This new etcd member has a state of Running but not ready until it receives the full database update from the etcd leader. When the new etcd member receives the full database update, the etcd Operator promotes the new etcd member to a voting member and removes the old etcd member from the cluster. After this transition is complete, it is safe for the old etcd pod and its data to be removed, so the preDrain lifecycle hook is removed. The control plane machine status condition Drainable is set to True . The machine controller attempts to drain the node that is backed by the control plane machine. If draining fails, Drained is set to False and the machine controller attempts to drain the node again. If draining succeeds, Drained is set to True . The control plane machine status condition Drained is set to True . If no other Operators have added a preTerminate lifecycle hook, the control plane machine status condition Terminable is set to True . The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. YAML snippet demonstrating the etcd quorum protection preDrain lifecycle hook apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: ... spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2 ... 1 The name of the preDrain lifecycle hook. 2 The hook-implementing controller that manages the preDrain lifecycle hook. Additional resources Lifecycle hooks for the machine deletion phase 12.7. Troubleshooting the control plane machine set Use the information in this section to understand and recover from issues you might encounter. 12.7.1. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. Additional resources Activating the control plane machine set custom resource Creating a control plane machine set custom resource 12.7.2. Adding a missing Azure internal load balancer The internalLoadBalancer parameter is required in both the ControlPlaneMachineSet and control plane Machine custom resources (CRs) for Azure. If this parameter is not preconfigured on your cluster, you must add it to both CRs. For more information about where this parameter is located in the Azure provider specification, see the sample Azure provider specification. The placement in the control plane Machine CR is similar. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api For each control plane machine, edit the CR by running the following command: USD oc edit machine <control_plane_machine_name> Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. steps For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Sample Microsoft Azure provider specification 12.7.3. Recovering a degraded etcd Operator Certain situations can cause the etcd Operator to become degraded. For example, while performing remediation, the machine health check might delete a control plane machine that is hosting etcd. If the etcd member is not reachable at that time, the etcd Operator becomes degraded. When the etcd Operator is degraded, manual intervention is required to force the Operator to remove the failed member and restore the cluster state. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api \ -o wide Any of the following conditions might indicate a failed control plane machine: The STATE value is stopped . The PHASE value is Failed . The PHASE value is Deleting for more than ten minutes. Important Before continuing, ensure that your cluster has two healthy control plane machines. Performing the actions in this procedure on more than one control plane machine risks losing etcd quorum and can cause data loss. If you have lost the majority of your control plane hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure "Restoring to a cluster state" instead of this procedure. Edit the machine CR for the failed control plane machine by running the following command: USD oc edit machine <control_plane_machine_name> Remove the contents of the lifecycleHooks parameter from the failed control plane machine and save your changes. The etcd Operator removes the failed machine from the cluster and can then safely add new etcd members. Additional resources Restoring to a cluster state 12.7.4. Upgrading clusters that run on RHOSP For clusters that run on Red Hat OpenStack Platform (RHOSP) that were created with OpenShift Container Platform 4.13 or earlier, you might have to perform post-upgrade tasks before you can use control plane machine sets. 12.7.4.1. Configuring RHOSP clusters that have machines with root volume availability zones after an upgrade For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true: The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier. The cluster infrastructure is installer-provisioned. Machines were distributed across multiple availability zones. Machines were configured to use root volumes for which block storage availability zones were not defined. To understand why this procedure is necessary, see Solution #7024383 . Procedure For all control plane machines, edit the provider spec for all control plane machines that match the environment. For example, to edit the machine master-0 , enter the following command: USD oc edit machine/<cluster_id>-master-0 -n openshift-machine-api where: <cluster_id> Specifies the ID of the upgraded cluster. In the provider spec, set the value of the property rootVolume.availabilityZone to the volume of the availability zone you want to use. An example RHOSP provider spec providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data 1 Set the zone name as this value. Note If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration. In your RHOSP cluster, find the availability zone of the root volumes for your machines and use that as the value. Run the following command to retrieve information about the control plane machine set resource: USD oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api Run the following command to edit the resource: USD oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster. Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator. 12.7.4.2. Configuring RHOSP clusters that have control plane machines with availability zones after an upgrade For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true: The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier. The cluster infrastructure is installer-provisioned. Control plane machines were distributed across multiple compute availability zones. To understand why this procedure is necessary, see Solution #7013893 . Procedure For the master-1 and master-2 control plane machines, open the provider specs for editing. For example, to edit the first machine, enter the following command: USD oc edit machine/<cluster_id>-master-1 -n openshift-machine-api where: <cluster_id> Specifies the ID of the upgraded cluster. For the master-1 and master-2 control plane machines, edit the value of the serverGroupName property in their provider specs to match that of the machine master-0 . An example RHOSP provider spec providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.15 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data 1 This value must match for machines master-0 , master-1 , and master-3 . Note If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration. In your RHOSP cluster, find the server group that your control plane instances are in and use that as the value. Run the following command to retrieve information about the control plane machine set resource: USD oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api Run the following command to edit the resource: USD oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster. Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator. 12.8. Disabling the control plane machine set The .spec.state field in an activated ControlPlaneMachineSet custom resource (CR) cannot be changed from Active to Inactive . To disable the control plane machine set, you must delete the CR so that it is removed from the cluster. When you delete the CR, the Control Plane Machine Set Operator performs cleanup operations and disables the control plane machine set. The Operator then removes the CR from the cluster and creates an inactive control plane machine set with default settings. 12.8.1. Deleting the control plane machine set To stop managing control plane machines with the control plane machine set on your cluster, you must delete the ControlPlaneMachineSet custom resource (CR). Procedure Delete the control plane machine set CR by running the following command: USD oc delete controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Verification Check the control plane machine set custom resource state. A result of Inactive indicates that the removal and replacement process is successful. A ControlPlaneMachineSet CR exists but is not activated. 12.8.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. 12.8.3. Re-enabling the control plane machine set To re-enable the control plane machine set, you must ensure that the configuration in the CR is correct for your cluster and activate it. Additional resources Activating the control plane machine set custom resource | [
"oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master",
"NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m",
"No resources found in openshift-machine-api namespace.",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc create -f <control_plane_machine_set>.yaml",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"openstack compute service set <target_node_host_name> nova-compute --disable",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api",
"oc delete machine -n openshift-machine-api <control_plane_machine_name> 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: \"\" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"providerSpec: value: instanceType: <compatible_aws_instance_type> 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5",
"providerSpec: value: metadataServiceOptions: authentication: Required 1",
"providerSpec: placement: tenancy: dedicated",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"1\" 11",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: \"1\" 1 - zone: \"2\" - zone: \"3\" platform: Azure 2",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2",
"\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1",
"oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2",
"providerSpec: value: flavor: m1.xlarge 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_datacenter_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name1> - name: <failure_domain_name2>",
"oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name}",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api",
"oc edit machine <control_plane_machine_name>",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide",
"oc edit machine <control_plane_machine_name>",
"oc edit machine/<cluster_id>-master-0 -n openshift-machine-api",
"providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data",
"oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit machine/<cluster_id>-master-1 -n openshift-machine-api",
"providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.15 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data",
"oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc delete controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/managing-control-plane-machines |
Index | Index A accelerators, Tokens for Storing Certificate System Subsystem Keys and Certificates active logs default file location, Logs message categories, Services That Are Logged adding new directory attributes, Adding New or Custom Attributes agent certificate, User Certificates agents authorizing key recovery, Recovering Keys port used for operations, Planning Ports algorithm cryptographic, Encryption and Decryption archiving rotated log files, Log File Rotation authentication certificate-based, Certificate-Based Authentication client and server, Authentication Confirms an Identity password-based, Password-Based Authentication See also client authentication, Certificate-Based Authentication See also server authentication, Certificate-Based Authentication automatic revocation checking, Enabling Automatic Revocation Checking on the CA B buffered logging, Buffered and Unbuffered Logging C CA certificate, Types of Certificates defined, A Certificate Identifies Someone or Something hierarchies and root, CA Hierarchies trusted, How CA Certificates Establish Trust CA chaining, Linked CA CA decisions for deployment CA renewal, Renewing or Reissuing CA Signing Certificates distinguished name, Planning the CA Distinguished Name root versus subordinate, Defining the Certificate Authority Hierarchy signing certificate, Setting the CA Signing Certificate Validity Period signing key, Choosing the Signing Key Type and Length CA hierarchy, Subordination to a Certificate System CA root CA, Subordination to a Certificate System CA subordinate CA, Subordination to a Certificate System CA CA scalability, CA Cloning CA signing certificate, CA Signing Certificates , Setting the CA Signing Certificate Validity Period Certificate Manager as root CA, Subordination to a Certificate System CA as subordinate CA, Subordination to a Certificate System CA CA hierarchy, Subordination to a Certificate System CA CA signing certificate, CA Signing Certificates chaining to third-party CAs, Linked CA cloning, CA Cloning KRA and, Planning for Lost Keys: Key Archival and Recovery certificate profiles Windows smart card login, Using the Windows Smart Card Logon Profile certificate-based authentication defined, Authentication Confirms an Identity certificates authentication using, Certificate-Based Authentication CA certificate, Types of Certificates chains, Certificate Chains contents of, Contents of a Certificate issuing of, Certificate Issuance renewing, Certificate Expiration and Renewal revoking, Certificate Expiration and Renewal S/MIME, Types of Certificates self-signed, CA Hierarchies verifying a certificate chain, Verifying a Certificate Chain changing DER-encoding order of DirectoryString, Changing the DER-Encoding Order ciphers defined, Encryption and Decryption client authentication SSL/TLS client certificates defined, Types of Certificates cloning, CA Cloning configuration file, CS.cfg Files CS.cfg, Overview of the CS.cfg Configuration File format, Overview of the CS.cfg Configuration File CRL signing certificate, Other Signing Certificates CRLs Certificate Manager support for, CRLs publishing to online validation authority, OCSP Services CS.cfg, CS.cfg Files comments and TPS, Overview of the CS.cfg Configuration File D deployment planning CA decisions distinguished name, Planning the CA Distinguished Name root versus subordinate, Defining the Certificate Authority Hierarchy signing certificate, Setting the CA Signing Certificate Validity Period signing key, Choosing the Signing Key Type and Length token management, Smart Card Token Management with Certificate System DER-encoding order of DirectoryString, Changing the DER-Encoding Order digital signatures defined, Digital Signatures directory attributes adding new, Adding New or Custom Attributes supported in CS, Changing DN Attributes in CA-Issued Certificates distinguished name (DN) extending attribute support, Changing DN Attributes in CA-Issued Certificates for CA, Planning the CA Distinguished Name E email, signed and encrypted, Signed and Encrypted Email encryption defined, Encryption and Decryption public-key, Public-Key Encryption symmetric-key, Symmetric-Key Encryption Error log defined, Tomcat Error and Access Logs extending directory-attribute support in CS, Changing DN Attributes in CA-Issued Certificates extensions structure of, Structure of Certificate Extensions external tokens defined, Tokens for Storing Certificate System Subsystem Keys and Certificates F flush interval for logs, Buffered and Unbuffered Logging H hardware accelerators, Tokens for Storing Certificate System Subsystem Keys and Certificates hardware tokens, Tokens for Storing Certificate System Subsystem Keys and Certificates See external tokens, Tokens for Storing Certificate System Subsystem Keys and Certificates how to search for keys, Archiving Keys I installation, Installing and Configuring Certificate System planning, A Checklist for Planning the PKI internal tokens, Tokens for Storing Certificate System Subsystem Keys and Certificates K key archival, Archiving Keys how it works, Archiving Keys how keys are stored, Archiving Keys how to set up, Manually Setting up Key Archival where keys are stored, Archiving Keys key length, Choosing the Signing Key Type and Length key recovery, Recovering Keys how to set up, Setting up Agent-Approved Key Recovery Schemes Key Recovery Authority setting up key archival, Manually Setting up Key Archival key recovery, Setting up Agent-Approved Key Recovery Schemes keys defined, Encryption and Decryption management and recovery, Key Management KRA Certificate Manager and, Planning for Lost Keys: Key Archival and Recovery L linked CA, Linked CA location of active log files, Logs logging buffered vs. unbuffered, Buffered and Unbuffered Logging log files archiving rotated files, Log File Rotation default location, Logs timing of rotation, Log File Rotation log levels, Log Levels (Message Categories) default selection, Log Levels (Message Categories) how they relate to message categories, Log Levels (Message Categories) significance of choosing the right level, Log Levels (Message Categories) services that are logged, Services That Are Logged types of logs, Logs Error, Tomcat Error and Access Logs O OCSP responder, OCSP Services OCSP server, OCSP Services OCSP signing certificate, Other Signing Certificates P password using for authentication, Authentication Confirms an Identity password-based authentication, defined, Password-Based Authentication password.conf configuring contents, Configuring the password.conf File configuring location, Configuring the password.conf File contents, Configuring the password.conf File passwords configuring the password.conf file, Configuring the password.conf File for subsystem instances, Managing System Passwords used by subsystem instances, Configuring the password.conf File PKCS #11 support, Tokens for Storing Certificate System Subsystem Keys and Certificates planning installation, A Checklist for Planning the PKI ports for agent operations, Planning Ports how to choose numbers, Planning Ports private key, defined, Public-Key Encryption public key defined, Public-Key Encryption management, Key Management publishing of CRLs to online validation authority, OCSP Services queue, Enabling and Configuring a Publishing Queue (see also publishing queue) publishing queue, Enabling and Configuring a Publishing Queue enabling, Enabling and Configuring a Publishing Queue R recovering users' private keys, Recovering Keys root CA, Subordination to a Certificate System CA root versus subordinate CA, Defining the Certificate Authority Hierarchy rotating log files archiving files, Log File Rotation how to set the time, Log File Rotation RSA, Choosing the Signing Key Type and Length S S/MIME certificate, Types of Certificates self-signed certificate, CA Hierarchies setting up key archival, Manually Setting up Key Archival key recovery, Setting up Agent-Approved Key Recovery Schemes signing certificate CA, Setting the CA Signing Certificate Validity Period signing key, for CA, Choosing the Signing Key Type and Length smart cards Windows login, Using the Windows Smart Card Logon Profile SSL/TLS client certificates, Types of Certificates SSL/TLS client certificate, SSL/TLS Server and Client Certificates SSL/TLS server certificate, SSL/TLS Server and Client Certificates subordinate CA, Subordination to a Certificate System CA subsystems configuring password file, Configuring the password.conf File T timing log rotation, Log File Rotation Token Key Service, Smart Card Token Management with Certificate System Token Processing System and, Smart Card Token Management with Certificate System Token Processing System, Smart Card Token Management with Certificate System scalability, Using Smart Cards Token Key Service and, Smart Card Token Management with Certificate System tokens defined, Tokens for Storing Certificate System Subsystem Keys and Certificates external, Tokens for Storing Certificate System Subsystem Keys and Certificates internal, Tokens for Storing Certificate System Subsystem Keys and Certificates viewing which tokens are installed, Viewing Tokens Windows login, Using the Windows Smart Card Logon Profile topology decisions, for deployment, Smart Card Token Management with Certificate System TPS comments in the CS.cfg file, Overview of the CS.cfg Configuration File Windows smart card login, Using the Windows Smart Card Logon Profile trusted CA, defined, How CA Certificates Establish Trust U unbuffered logging, Buffered and Unbuffered Logging user certificate, User Certificates W Windows smart card login, Using the Windows Smart Card Logon Profile | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/ix01 |
Chapter 8. KafkaListenerAuthenticationScramSha512 schema reference | Chapter 8. KafkaListenerAuthenticationScramSha512 schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationScramSha512 type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationOAuth , KafkaListenerAuthenticationCustom . It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512 . Property Property type Description type string Must be scram-sha-512 . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaListenerAuthenticationScramSha512-reference |
Chapter 14. Using TLS certificates for applications accessing RGW | Chapter 14. Using TLS certificates for applications accessing RGW Most of the S3 applications require TLS certificate in the forms such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as Kubernetes secret and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the kubernetes secret: <secret_name> The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the the TLS certificate from the kubernetes secret: <secret_name> The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert . | [
"oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d",
"oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/using-tls-certificates-for-applications-accessing-rgw_rhodf |
Chapter 1. Overview | Chapter 1. Overview The Ansible Automation Platform (AAP) 2.3 on Red Hat OpenShift reference architecture provides an opinionated setup of deploying an Ansible Automation Platform environment. It provides a step-by-step deployment procedure with the latest best practices to install and configure Ansible Automation Platform 2.3. It is best suited for system and platform administrators looking to deploy Ansible Automation Platform on Red Hat OpenShift. By utilizing the power of Red Hat OpenShift, we can streamline the deployment of Ansible Automation Platform and significantly reduce the time and effort required to set it up. Figure 1.1. automation controller architecture The Figure 1.1, "automation controller architecture" , shows the deployment process flow of the Ansible Automation Platform (AAP) operator deploying the automation controller component. The automation controller operator, one of the three operators that comprise the larger Ansible Automation Platform operator, is responsible for deploying the various pods including the controller, postgres and automation job pods. Figure 1.2. automation hub architecture Similarly, the Figure 1.2, "automation hub architecture" shows the AAP operator deploying the automation hub component. The automation hub operator deploys various pods that communicate with each other to deliver automation hub to share internally generated content, Red Hat Ansible Certified Content, execution environments, and Ansible Validated Content with your teams. In addition, this reference architecture highlights key steps involved in providing an efficient and scalable environment delivered by a solid foundation for any of your automation efforts. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/overview |
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices External mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/preface-ibm-z |
8.141. mobile-broadband-provider-info | 8.141. mobile-broadband-provider-info 8.141.1. RHBA-2014:0749 - mobile-broadband-provider-info bug fix update An updated mobile-broadband-provider-info package that fixes one bug is now available for Red Hat Enterprise Linux 6. The mobile-broadband-provider-info package contains a database of service provider specific settings of mobile broadband (3G) providers in various countries. Bug Fix BZ# 996599 Previously, the access point name (APN) string incorrectly contained a space at the end. As a consequence, a connection to the Israel Pelephone 3G provider could not be established. This update fixes the typographical error in the APN string, and the connection can now be established as expected. Users of mobile-broadband-provider-info are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/mobile-broadband-provider-info |
Chapter 1. Introduction to Red Hat OpenShift API Management | Chapter 1. Introduction to Red Hat OpenShift API Management Learn about the features and functions available in the Red Hat OpenShift API Management cloud service. 1.1. What is OpenShift API Management Red Hat OpenShift API Management is a cloud service for creating, securing, and publishing your APIs. The OpenShift API Management service is an add-on for Red Hat OpenShift Dedicated and Red Hat OpenShift Service on AWS. The service is based on the Red Hat 3scale API Management platform and also includes an implementation of Red Hat Single Sign-On. Understanding Red Hat 3scale API Management Application Programming Interface (API) management refers to the processes of distributing, controlling, and analyzing the APIs that connect applications and data across cloud environments. Red Hat OpenShift API Management provides a management platform that allows users to share, secure, distribute, control, and monetize APIs. After setting up authentication and user accounts, OpenShift API Management developers, also referred to as API providers, can configure, and publish their APIs. The main OpenShift API Management components include: APIcast - the 3scale API gateway Admin Portal - the 3scale console that API providers work in Developer Portal - the interface for API consumers Red Hat Single Sign-On - for authenticating access to the Developer Portal as well as to APIs API providers are developers who work in the 3scale Admin Portal, for which an administrator has given them accounts. API providers also work in the OpenShift Dedicated cluster to deploy applications, such as a backend for service API requests. API providers create and publish APIs, and can configure Red Hat Single Sign-On authentication to secure APIs. 3scale separates APIs into two main groups: Backends are internal APIs bundled in a product. Backends grant API providers the freedom to map internal API organization structures to 3scale. A backend contains a private URL for an internal API. It is exposed through mapping rules and the public URL of one or more 3scale products. Products are customer-facing APIs. Products facilitate the creation of robust yet simplified offerings for API consumers. A product includes application plans and configuration of the APIcast gateway. A product can bundle multiple backends. When a 3scale product is ready for use, an API provider publishes it in the Developer Portal. API consumers visit the Developer Portal to subscribe to a plan that enables them to use the 3scale product that contains that API. Consumers can then call the API's operations, subject to any usage policies that may be in effect. Understanding Red Hat Single Sign-On Red Hat Single Sign-On provides single sign-on (SSO) authentication to secure web applications. You use this SSO implementation to control access to 3scale Developer Portals and to 3scale API products. It is not supported as a company-wide SSO solution. Red Hat OpenShift API Management considerations Red Hat OpenShift API Management introduces several product considerations that need to be thoroughly understood before proceeding with the installation and configuration of the service: Authentication Options: OpenShift API Management provides various authentication options within the service to ensure secure access control and identity verification: OAuth 2.0: An authorization framework that enables secure and delegated access to APIs. OAuth 2.0 allows users and applications to obtain limited, scoped access tokens, which can be used to authenticate and authorize API requests. OpenID Connect: An identity layer built on top of OAuth 2.0 that provides additional features for authentication, such as user profile information and identity federation. OpenID Connect allows users to authenticate using their existing accounts from various identity providers. LDAP (Lightweight Directory Access Protocol): A protocol commonly used for accessing and managing directory information. LDAP integration enables organizations to leverage their existing user directories for authentication within OpenShift API Management. Token-based authentication: A mechanism that involves exchanging a token for authentication purposes. Tokens are typically issued by an identity provider or authentication service and can be used to validate and authorize API requests. CIDR (Classless Inter-Domain Routing) : CIDR is a method used to allocate and manage IP addresses more efficiently. It replaces the older system of class-based IP addressing and enables flexible allocation of IP address blocks. CIDR notation is used to define network ranges and subnets. Understanding CIDR is important for correctly configuring networking components, such as IP whitelisting, firewall rules, and defining network policies for secure communication. The CIDR range must not overlap with any network you would like to peer within the OpenShift cluster VPC. If you do not specify a CIDR value, you can click the link in the OpenShift Cluster Manager to apply the default CIDR range. After submitting the initial configuration, you cannot modify the CIDR range. If you want to change the CIDR range, you must delete and reinstall Red Hat OpenShift API Management. The CIDR prefix length range must be between /16 and /26 . Only CIDR values within this range are permitted. You can use 10.1.0.0/26 as the default CIDR range. Custom Wildcard Domain: A wildcard domain name allows you to handle dynamic routing of API traffic across various endpoints and services within your infrastructure. By using a wildcard DNS record (for example *.example.com ), you can ensure that any subdomain under the specified domain is automatically routed to the corresponding API service or endpoint. This flexibility is particularly useful when dealing with multiple APIs or microservices, as it simplifies the management of API endpoints and enables dynamic scaling and routing. To configure a custom wildcard domain name with 3scale and Red Hat OpenShift API Management, you would typically do the following: Obtain a registered domain: You need to have a registered domain name that you own and have administrative control over. Configure DNS settings: Update your DNS settings for the domain to include a wildcard DNS record pointing to the appropriate IP address or load balancer associated with your API infrastructure. Obtain an SSL/TLS certificate: Obtain an SSL/TLS certificate for your custom wildcard domain name to ensure secure communication between clients and your API services. This certificate can be either self-signed or issued by a trusted certificate authority (CA). Configure 3scale and OpenShift API Management: In the configuration settings of both 3scale and OpenShift API Management, specify the custom wildcard domain name as the endpoint for your APIs. This ensures that API requests made to any subdomain under the wildcard domain are correctly routed and processed by the respective API services. SMTP (Simple Mail Transfer Protocol) : SMTP is a widely used standard protocol for email transmission. In the context of OpenShift API Management, SMTP configuration allows you to specify the email server settings required for email notifications, alerts, and communication within the system. By providing the necessary SMTP details, such as the server address, port number, authentication credentials, and encryption settings, you enable the platform to send emails seamlessly. To successully apply an SMTP configuration, you must enter values for all related fields. Values for all Custom SMTP fields are required, if you specify values for any of the fields. Entering an SMTP configuration is optional. Red Hat OpenShift API Management default values are applied if you leave the SMTP configuration fields blank. You can enter values for the following fields: Custom SMTP Mail Server Address - The remote mail server as a relay Custom SMTP From Address - Email address of the outgoing mail Custom SMTP Username - The mail server username Custom SMTP Password - The mail server password Custom SMTP Port - The port on which the mail server is listening for new connections VPC Configurations: A VPC (Virtual Private Cloud) is a virtual network infrastructure that allows you to provision and manage network resources within a logically isolated environment. OpenShift API Management supports the option to bring your own VPC, which means you can use your existing VPC setup instead of relying on the default networking configuration. The following Availability Zone (AZ) scenarios represent the tested configurations. Configurations that differ from the following, may not work as expected and are not supported. Single-AZ installation: The tested architecture includes a VPC with an internet gateway, an availability zone containing a public subnet, and a private subnet. Multi-AZ installation: The tested architecture includes a VPC with an internet gateway, up to three availability zones (with each containing one public subnet), and a private subnet. PrivateLink Multi-AZ installation: The tested architecture includes connections to clusters using AWS PrivateLink endpoints instead of public endpoints for OpenShift Service on AWS (ROSA) or OpenShift Dedicated (OSD). Additional resources Supported bring-your-own VPC architectures 1.2. How to set up OpenShift API Management A Red Hat OpenShift Dedicated cluster administrator sets up the cluster and identity provider and adds the OpenShift API Management service to a cluster. Then, you configure the service users. If desired, you can customize APIcast, which is the interface that handles calls to a 3scale API product. Note In Red Hat OpenShift API Management documentation, ignore content for 3scale Hosted (SaaS). It does not apply to OpenShift API Management. Configure an identity provider If an identity provider is already configured, there is no need to configure another one. Otherwise, you must choose and configure an identity provider, which can be LDAP, GitHub, GitHub Enterprise, Google, or OpenID Connect. Instructions: Configuring identity providers Add OpenShift API Management Adding OpenShift API Management to a cluster makes the service available for use by 3scale API providers. You can add OpenShift API Management to an OpenShift Dedicated cluster, or to a ROSA cluster. Instructions: Adding OpenShift API Management to your cluster Adding OpenShift API Management to your Red Hat OpenShift Service on AWS cluster Configure 3scale API provider account permissions In the 3scale Admin Portal, configure account permissions so that API providers in your organization can create, configure, and launch 3scale API products. When a new user logs in to the OpenShift Dedicated cluster by using the configured identity provider, the user automatically receives an OpenShift account with permission to access OpenShift API Management. You manage these accounts in the 3scale Admin Portal. By default, Single Sign-On is configured for 3scale in OpenShift API Management. Instructions: Red Hat Single Sign-On for the 3scale Admin Portal 1.3. How to use OpenShift API Management Use OpenShift API Management to create, secure, and publish your APIs. Get started with 3scale You can use the 3scale wizard to start learning about how to add and test a 3scale API product. Instructions: First steps with 3scale Create and configure an API In the 3scale Admin Portal, create and configure an API to ensure that access is protected by API keys, tracked, and monitored by 3scale with basic rate limits and controls in place. This involves the following steps: Create API backends Create API products Create mapping rules and application plans to define a customer-facing API product Capture metrics Configure API access rules Mapping rules define the metrics or methods to report. Application plans define the rules such as limits, pricing, and features for using an API product. An application subscribes to an application plan. Instructions: Adding and configuring APIs Configure APIcast policies APIcast is the 3scale API gateway, which is the endpoint that accepts API product calls and routes them to bundled backends. OpenShift API Management provides APIcast staging for developing and testing APIs and also APIcast production, for published APIs. APIcast policies are units of functionality that modify how APIcast operates. Policies can be enabled, disabled, and configured to control APIcast behavior. Use custom policies to add functionality that is not available in a default APIcast deployment. Instructions: APIcast policies Secure your API If you want to secure your API by using OpenID and OAuth, then in the Red Hat Single Sign-On Admin Console, create a Red Hat Single Sign-On realm. An SSO realm is required to manage authentication for access to the Developer Portal and 3scale API products. In the 3scale Admin Portal, set up authentication to control access to your API product and to the 3scale Developer Portal. Instructions: Enabling and disabling authentication through Red Hat Single Sign-On Set up a 3scale Developer Portal A well-structured developer portal and great documentation are key elements to assure adoption. A developer portal is the main hub for managing interactions with API consumers and for API consumers to access their API keys in a secure way. In the 3scale Admin Portal, add OpenAPI Specification 3.0 conforming documents for use in a Developer Portal. API consumers use the Developer Portal to access the APIs defined in these documents. Then, configure the Developer Portal and add your APIs. Instructions: Providing APIs in the Developer Portal Discover and import APIs available in your OpenShift Dedicated cluster Summary: from zero to hero Developer Portal Set up monitoring and analytics for your API You can designate methods in your API and add metrics to set access limits for any of an API product's application plans. For an API backend, methods and metrics can be used to set access limits in the application plan of any API product that bundles the backend. Instructions: Designating methods and adding metrics for capturing usage details API analytics Launch the API product After you have configured and secured your API and created a Developer Portal, you can launch your API so that consumers can begin to use it. Instructions: Going live Monitor your API After your API is launched, you can monitor metrics that indicate how it is being used. Knowing how a 3scale API product is used is a crucial step for managing traffic, provisioning for peaks, and identifying the users who most often send requests to the API product. Instructions: Viewing 3scale built-in traffic analytics for applications 1.4. Get OpenShift API Management To get OpenShift API Management, you can add it to your OpenShift Dedicated cluster or ROSA cluster. To learn more, go to https://cloud.redhat.com/application-services/overview . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/getting_started_with_red_hat_openshift_api_management/introduction-to-rhoam_openshift-api-management-adding |
B.76. python-dmidecode | B.76. python-dmidecode B.76.1. RHBA-2011:1157 - python-dmidecode bug fix update An updated python-dmidecode package that fixes a bug is now available for Red Hat Enterprise Linux 6 Extended Update Support. The python-dmidecode package provides a Python extension module that uses the code-base of the dmidecode utility and presents the data as Python data structures or as XML data using the libxml2 library. Bug Fix BZ# 726613 Previously, certain DMI (Direct Media Interface) tables did not report CPU information as a string and returned the NULL value instead. Consequently, Python terminated unexpectedly with a segmentation fault when trying to identify the CPU type by performing a string comparison. With this update, additional checks for NULL values, performed prior the string comparison, have been added to the code, thus fixing this bug. All users of python-dmidecode are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/python-dmidecode |
Chapter 2. Adding trusted certificate authorities | Chapter 2. Adding trusted certificate authorities Learn how to add custom trusted certificate authorities to Red Hat Advanced Cluster Security for Kubernetes. If you are using an enterprise certificate authority (CA) on your network, or self-signed certificates, you must add the CA's root certificate to Red Hat Advanced Cluster Security for Kubernetes as a trusted root CA. Adding trusted root CAs allows: Central and Scanner to trust remote servers when you integrate with other tools. Sensor to trust custom certificates you use for Central. You can add additional CAs during the installation or on an existing deployment. Note You must first configure your trusted CAs in the cluster where you have deployed Central and then propagate the changes to Scanner and Sensor. 2.1. Configuring additional CAs To add custom CAs: Procedure Download the ca-setup.sh script. Note If you are doing a new installation, you can find the ca-setup.sh script in the scripts directory at central-bundle/central/scripts/ca-setup.sh . You must run the ca-setup.sh script in the same terminal from which you logged into your OpenShift Container Platform cluster. Make the ca-setup.sh script executable: USD chmod +x ca-setup.sh To add: A single certificate, use the -f (file) option: USD ./ca-setup.sh -f <certificate> Note You must use a PEM-encoded certificate file (with any extension). You can also use the -u (update) option along with the -f option to update any previously added certificate. Multiple certificates at once, move all certificates in a directory, and then use the -d (directory) option: USD ./ca-setup.sh -d <directory_name> Note You must use PEM-encoded certificate files with a .crt or .pem extension. Each file must only contain a single certificate. You can also use the -u (update) option along with the -d option to update any previously added certificates. 2.2. Propagating changes After you configure trusted CAs, you must make Red Hat Advanced Cluster Security for Kubernetes services trust them. If you have configured trusted CAs after the installation, you must restart Central. Additionally, if you are also adding certificates for integrating with image registries, you must restart both Central and Scanner. 2.2.1. Restarting the Central container You can restart the Central container by killing the Central container or by deleting the Central pod. Procedure Run the following command to kill the Central container: Note You must wait for at least 1 minute, until OpenShift Container Platform propagates your changes and restarts the Central container. USD oc -n stackrox exec deploy/central -c central -- kill 1 Or, run the following command to delete the Central pod: USD oc -n stackrox delete pod -lapp=central 2.2.2. Restarting the Scanner container You can restart the Scanner container by deleting the pod. Procedure Run the following command to delete the Scanner pod: On OpenShift Container Platform: USD oc delete pod -n stackrox -l app=scanner On Kubernetes: USD kubectl delete pod -n stackrox -l app=scanner Important After you have added trusted CAs and configured Central, the CAs are included in any new Sensor deployment bundles that you create. If an existing Sensor reports problems while connecting to Central, you must generate a Sensor deployment YAML file and update existing clusters. If you are deploying a new Sensor using the sensor.sh script, run the following command before you run the sensor.sh script: USD ./ca-setup-sensor.sh -d ./additional-cas/ If you are deploying a new Sensor using Helm, you do not have to run any additional scripts. | [
"chmod +x ca-setup.sh",
"./ca-setup.sh -f <certificate>",
"./ca-setup.sh -d <directory_name>",
"oc -n stackrox exec deploy/central -c central -- kill 1",
"oc -n stackrox delete pod -lapp=central",
"oc delete pod -n stackrox -l app=scanner",
"kubectl delete pod -n stackrox -l app=scanner",
"./ca-setup-sensor.sh -d ./additional-cas/"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/add-trusted-ca |
12.6. Geo-replication Logs | 12.6. Geo-replication Logs The following log files are used for a geo-replication session: Master-log-file - log file for the process that monitors the master volume. Slave-log-file - log file for process that initiates changes on a slave. Master-gluster-log-file - log file for the maintenance mount point that the geo-replication module uses to monitor the master volume. Slave-gluster-log-file - If the slave is a Red Hat Gluster Storage Volume, this log file is the slave's counterpart of Master-gluster-log-file . 12.6.1. Viewing the Geo-replication Master Log Files To view the Master-log-file for geo-replication, use the following command: For example: 12.6.2. Viewing the Geo-replication Slave Log Files To view the log file for geo-replication on a slave, use the following procedure. glusterd must be running on slave machine. On the master, run the following command to display the session-owner details: For example: On the slave, run the following command with the session-owner value from the step: For example: | [
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config log-file",
"gluster volume geo-replication Volume1 example.com::slave-vol config log-file",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config session-owner",
"gluster volume geo-replication Volume1 example.com::slave-vol config session-owner 5f6e5200-756f-11e0-a1f0-0800200c9a66",
"gluster volume geo-replication SLAVE_VOL config log-file /var/log/gluster/ SESSION_OWNER :remote-mirror.log",
"gluster volume geo-replication slave-vol config log-file /var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-geo-replication_logs |
Chapter 3. Managing resource servers | Chapter 3. Managing resource servers According to the OAuth2 specification, a resource server is a server hosting the protected resources and capable of accepting and responding to protected resource requests. In Red Hat Single Sign-On, resource servers are provided with a rich platform for enabling fine-grained authorization for their protected resources, where authorization decisions can be made based on different access control mechanisms. Any client application can be configured to support fine-grained permissions. In doing so, you are conceptually turning the client application into a resource server. 3.1. Creating a client application The first step to enable Red Hat Single Sign-On Authorization Services is to create the client application that you want to turn into a resource server. Procedure Click Clients . Clients On this page, click Create . Add Client Type the Client ID of the client. For example, my-resource-server . Type the Root URL for your application. For example: Click Save . The client is created and the client Settings page opens. A page similar to the following is displayed: Client Settings 3.2. Enabling authorization services To turn your OIDC Client Application into a resource server and enable fine-grained authorization, select Access type confidential and click the Authorization Enabled switch to ON then click Save . Enabling authorization services A new Authorization tab is displayed for this client. Click the Authorization tab and a page similar to the following is displayed: Resource server settings The Authorization tab contains additional sub-tabs covering the different steps that you must follow to actually protect your application's resources. Each tab is covered separately by a specific topic in this documentation. But here is a quick description about each one: Settings General settings for your resource server. For more details about this page see the Resource Server Settings section. Resource From this page, you can manage your application's resources . Authorization Scopes From this page, you can manage scopes . Policies From this page, you can manage authorization policies and define the conditions that must be met to grant a permission. Permissions From this page, you can manage the permissions for your protected resources and scopes by linking them with the policies you created. Evaluate From this page, you can simulate authorization requests and view the result of the evaluation of the permissions and authorization policies you have defined. Export Settings From this page, you can export the authorization settings to a JSON file. 3.2.1. Resource server settings On the Resource Server Settings page, you can configure the policy enforcement mode, allow remote resource management, and export the authorization configuration settings. Policy Enforcement Mode Specifies how policies are enforced when processing authorization requests sent to the server. Enforcing (default mode) Requests are denied by default even when there is no policy associated with a given resource. Permissive Requests are allowed even when there is no policy associated with a given resource. Disabled Disables the evaluation of all policies and allows access to all resources. Decision Strategy This configurations changes how the policy evaluation engine decides whether or not a resource or scope should be granted based on the outcome from all evaluated permissions. Affirmative means that at least one permission must evaluate to a positive decision in order grant access to a resource and its scopes. Unanimous means that all permissions must evaluate to a positive decision in order for the final decision to be also positive. As an example, if two permissions for a same resource or scope are in conflict (one of them is granting access and the other is denying access), the permission to the resource or scope will be granted if the choosen strategy is Affirmative . Otherwise, a single deny from any permission will also deny access to the resource or scope. Remote Resource Management Specifies whether resources can be managed remotely by the resource server. If false, resources can be managed only from the administration console. 3.3. Default Configuration When you create a resource server, Red Hat Single Sign-On creates a default configuration for your newly created resource server. The default configuration consists of: A default protected resource representing all resources in your application. A policy that always grants access to the resources protected by this policy. A permission that governs access to all resources based on the default policy. The default protected resource is referred to as the default resource and you can view it if you navigate to the Resources tab. Default resource This resource defines a Type , namely urn:my-resource-server:resources:default and a URI /* . Here, the URI field defines a wildcard pattern that indicates to Red Hat Single Sign-On that this resource represents all the paths in your application. In other words, when enabling policy enforcement for your application, all the permissions associated with the resource will be examined before granting access. The Type mentioned previously defines a value that can be used to create typed resource permissions that must be applied to the default resource or any other resource you create using the same type. The default policy is referred to as the only from realm policy and you can view it if you navigate to the Policies tab. Default policy This policy is a JavaScript-based policy defining a condition that always grants access to the resources protected by this policy. If you click this policy you can see that it defines a rule as follows: // by default, grants any permission associated with this policy USDevaluation.grant(); Lastly, the default permission is referred to as the default permission and you can view it if you navigate to the Permissions tab. Default Permission This permission is a resource-based permission , defining a set of one or more policies that are applied to all resources with a given type. 3.3.1. Changing the default configuration You can change the default configuration by removing the default resource, policy, or permission definitions and creating your own. The default resource is created with an URI that maps to any resource or path in your application using a / * pattern. Before creating your own resources, permissions and policies, make sure the default configuration doesn't conflict with your own settings. Note The default configuration defines a resource that maps to all paths in your application. If you are about to write permissions to your own resources, be sure to remove the Default Resource or change its URIS fields to a more specific paths in your application. Otherwise, the policy associated with the default resource (which by default always grants access) will allow Red Hat Single Sign-On to grant access to any protected resource. 3.4. Export and import authorization configuration The configuration settings for a resource server (or client) can be exported and downloaded. You can also import an existing configuration file for a resource server. Importing and exporting a configuration file is helpful when you want to create an initial configuration for a resource server or to update an existing configuration. The configuration file contains definitions for: Protected resources and scopes Policies Permissions 3.4.1. Exporting a configuration file Procedure Navigate to the Resource Server Settings page. Click the Export Settings tab. On this page, click Export . Export Settings The configuration file is exported in JSON format and displayed in a text area, from which you can copy and paste. You can also click Download to download the configuration file and save it. 3.4.2. Importing a configuration file You can import a configuration file for a resource server. Procedure Navigate to the Resource Server Settings page. Import Settings Click Select file and choose a file containing the configuration that you want to import. | [
"http://USD{host}:USD{port}/my-resource-server",
"// by default, grants any permission associated with this policy USDevaluation.grant();"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/authorization_services_guide/resource_server_overview |
Part V. Known Issues | Part V. Known Issues This part describes known issues in Red Hat Enterprise Linux 7.1. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/part-red_hat_enterprise_linux-7.1_release_notes-known_issues |
9.7.4. Hostname Formats | 9.7.4. Hostname Formats The host(s) can be in the following forms: Single machine A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved by the server), or an IP address. Series of machines specified with wildcards Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots ( . ) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com . IP networks Use a.b.c.d/ z , where a.b.c.d is the network and z is the number of bits in the netmask (for example 192.168.0.0/24). Another acceptable format is a.b.c.d/ netmask , where a.b.c.d is the network and netmask is the netmask (for example, 192.168.100.8/255.255.255.0). Netgroups Use the format @ group-name , where group-name is the NIS netgroup name. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s2-nfs-hostname-formats |
Logging | Logging OpenShift Container Platform 4.13 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"tls.verify_certificate = false tls.verify_hostname = false",
"ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit",
"oc get clusterversion/version -o jsonpath='{.spec.clusterID}{\"\\n\"}'",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc delete pod --selector logging-infra=collector",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/nodes?v",
"oc -n openshift-logging get pods -l component=elasticsearch",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/master?v",
"oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/recovery?active_only=true",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health | grep number_of_pending_tasks",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_search/scroll/_all -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_nodes/stats?pretty",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",",
"apiVersion: v1 kind: Namespace metadata: name: <name> 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: [ ] 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc get csv -n <namespace>",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE openshift-logging clusterlogging.5.8.0-202007012112.p0 OpenShift Logging 5.8.0-202007012112.p0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"NAME READY UP-TO-DATE AVAILABLE AGE cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 1/1 1 1 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 1/1 1 1 6m49s elasticsearch-cdm-x6kdekli-3 1/1 1 1 6m44s",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: managementState: Managed 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging",
"oc label namespace openshift-operators-redhat project=openshift-operators-redhat",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ingress-operators-redhat spec: ingress: - from: - podSelector: {} - from: - namespaceSelector: matchLabels: project: \"openshift-operators-redhat\" - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"oc -n openshift-logging delete subscription <subscription>",
"oc -n openshift-logging delete operatorgroup <operator_group_name>",
"oc delete clusterserviceversion cluster-logging.<version>",
"oc get operatorgroup <operator_group_name> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin || oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ \"spec\": { \"plugins\": [\"logging-view-plugin\"]}}'",
"oc patch clusterlogging instance --type=merge --patch '{ \"metadata\": { \"annotations\": { \"logging.openshift.io/ocp-console-migration-target\": \"lokistack-dev\" }}}' -n openshift-logging",
"clusterlogging.logging.openshift.io/instance patched",
"oc get clusterlogging instance -o=jsonpath='{.metadata.annotations.logging\\.openshift\\.io/ocp-console-migration-target}' -n openshift-logging",
"\"lokistack-dev\"",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"oc -n openshift-logging edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.13.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <filename>.yaml",
"oc delete pod --selector logging-infra=collector",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json",
"apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true",
"[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000",
"<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>",
"oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 4 logId : \"app-gcp\" 5 pipelines: - name: test-app inputRefs: 6 - application outputRefs: - gcp-1",
"oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1",
"Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName \"<resource_name>\" -Name \"<workspace_name>\"",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: \"/subscriptions/111111111\" 1 host: \"ods.opinsights.azure.com\" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name>",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: \"loki\" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: \"myValue\" 13",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: \"audit\"",
"spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1",
"oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: clf-collector 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3",
"oc apply -f <filename>.yaml",
"oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging",
"NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"apiVersion: logging.openshift.io/v1beta1 kind: ClusterLogForwarder metadata: spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline",
"oc apply -f <filename>.yaml",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/collector-config --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging 6",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small 1 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 2 type: s3 3 storageClassName: <storage_class_name> 4 tenants: mode: openshift-logging 5",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"delete pvc __<pvc_name>__ -n openshift-logging",
"delete pod __<pod_name>__ -n openshift-logging",
"patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: \"true\" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: \"true\" 3",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: \"^open\" test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/logging/index |
Appendix A. Using your subscription | Appendix A. Using your subscription Apicurio Registry is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing your account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading ZIP and TAR files To access ZIP or TAR files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat Integration entries in the Integration and Automation category. Select the desired Apicurio Registry product. The Software Downloads page opens. Click the Download link for your component. Revised on 2025-02-11 14:03:23 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/release_notes_for_apicurio_registry_2.6/using_your_subscription |
Chapter 3. Gathering diagnostic information for support | Chapter 3. Gathering diagnostic information for support When you open a support case, you must provide debugging information about your cluster to the Red Hat Support team. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and Red Hat OpenShift GitOps components. Note For prompt support, provide diagnostic information for both OpenShift Container Platform and Red Hat OpenShift GitOps. 3.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. Example command USD oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. Example command USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. Example pod NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. Example command USD oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0 3.2. Collecting debugging data for Red Hat OpenShift GitOps Use the oc adm must-gather CLI command to collect the following details about the cluster that is associated with Red Hat OpenShift GitOps: The subscription and namespace of the Red Hat OpenShift GitOps Operator. The namespaces where ArgoCD objects are available and the objects in those namespaces, such as ArgoCD , Applications , ApplicationSets , AppProjects , and configmaps . A list of the namespaces that are managed by the Red Hat OpenShift GitOps Operator, and resources from those namespaces. All GitOps-related custom resource objects and definitions. Operator and Argo CD logs. Warning and error-level events. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the OpenShift Container Platform CLI ( oc ). You have installed the Red Hat OpenShift GitOps Operator. Procedure Navigate to the directory where you want to store the debugging information. Run the oc adm must-gather command with the Red Hat OpenShift GitOps must-gather image: USD oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:<image_version_tag> 1 1 The must-gather image for GitOps. Example command USD oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0 The must-gather tool creates a new directory that starts with ./must-gather.local in the current directory. For example, ./must-gather.local.4157245944708210399 . Create a compressed file from the directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210399 1 1 Replace must-gather-local.4157245944708210399 with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.3. Additional resources Gathering data about specific features | [
"oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0",
"oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:<image_version_tag> 1",
"oc adm must-gather --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v1.10.0",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210399 1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/understanding_openshift_gitops/gathering-gitops-diagnostic-information-for-support |
Chapter 91. zone | Chapter 91. zone This chapter describes the commands under the zone command. 91.1. zone abandon Abandon a zone Usage: Table 91.1. Positional Arguments Value Summary id Zone id Table 91.2. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 91.2. zone axfr AXFR a zone Usage: Table 91.3. Positional Arguments Value Summary id Zone id Table 91.4. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 91.3. zone blacklist create Create new blacklist Usage: Table 91.5. Optional Arguments Value Summary -h, --help Show this help message and exit --pattern PATTERN Blacklist pattern --description DESCRIPTION Description --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.6. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.7. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.8. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.9. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.4. zone blacklist delete Delete blacklist Usage: Table 91.10. Positional Arguments Value Summary id Blacklist id Table 91.11. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 91.5. zone blacklist list List blacklists Usage: Table 91.12. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.13. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 91.14. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.15. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.16. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.6. zone blacklist set Set blacklist properties Usage: Table 91.17. Positional Arguments Value Summary id Blacklist id Table 91.18. Optional Arguments Value Summary -h, --help Show this help message and exit --pattern PATTERN Blacklist pattern --description DESCRIPTION Description --no-description- all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.19. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.20. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.21. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.22. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.7. zone blacklist show Show blacklist details Usage: Table 91.23. Positional Arguments Value Summary id Blacklist id Table 91.24. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.25. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.26. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.27. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.28. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.8. zone create Create new zone Usage: Table 91.29. Positional Arguments Value Summary name Zone name Table 91.30. Optional Arguments Value Summary -h, --help Show this help message and exit --email EMAIL Zone email --type TYPE Zone type --ttl TTL Time to live (seconds) --description DESCRIPTION Description --masters MASTERS [MASTERS ... ] Zone masters --attributes ATTRIBUTES [ATTRIBUTES ... ] Zone attributes --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.31. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.32. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.33. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.34. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.9. zone delete Delete zone Usage: Table 91.35. Positional Arguments Value Summary id Zone id Table 91.36. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.37. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.38. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.39. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.40. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.10. zone export create Export a Zone Usage: Table 91.41. Positional Arguments Value Summary zone_id Zone id Table 91.42. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.43. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.44. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.45. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.46. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.11. zone export delete Delete a Zone Export Usage: Table 91.47. Positional Arguments Value Summary zone_export_id Zone export id Table 91.48. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 91.12. zone export list List Zone Exports Usage: Table 91.49. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.50. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 91.51. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.52. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.53. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.13. zone export show Show a Zone Export Usage: Table 91.54. Positional Arguments Value Summary zone_export_id Zone export id Table 91.55. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.56. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.57. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.58. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.59. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.14. zone export showfile Show the zone file for the Zone Export Usage: Table 91.60. Positional Arguments Value Summary zone_export_id Zone export id Table 91.61. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.62. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.63. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.64. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.65. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.15. zone import create Import a Zone from a file on the filesystem Usage: Table 91.66. Positional Arguments Value Summary zone_file_path Path to a zone file Table 91.67. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.68. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.69. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.70. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.71. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.16. zone import delete Delete a Zone Import Usage: Table 91.72. Positional Arguments Value Summary zone_import_id Zone import id Table 91.73. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 91.17. zone import list List Zone Imports Usage: Table 91.74. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.75. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 91.76. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.77. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.78. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.18. zone import show Show a Zone Import Usage: Table 91.79. Positional Arguments Value Summary zone_import_id Zone import id Table 91.80. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.81. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.82. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.83. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.84. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.19. zone list List zones Usage: Table 91.85. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME Zone name --email EMAIL Zone email --type TYPE Zone type --ttl TTL Time to live (seconds) --description DESCRIPTION Description --status STATUS Zone status --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.86. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 91.87. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.88. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.89. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.20. zone set Set zone properties Usage: Table 91.90. Positional Arguments Value Summary id Zone id Table 91.91. Optional Arguments Value Summary -h, --help Show this help message and exit --email EMAIL Zone email --ttl TTL Time to live (seconds) --description DESCRIPTION Description --no-description- masters MASTERS [MASTERS ... ] Zone masters --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.92. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.93. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.94. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.95. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.21. zone show Show zone details Usage: Table 91.96. Positional Arguments Value Summary id Zone id Table 91.97. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.98. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.99. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.100. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.101. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.22. zone transfer accept list List Zone Transfer Accepts Usage: Table 91.102. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.103. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 91.104. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.105. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.106. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.23. zone transfer accept request Accept a Zone Transfer Request Usage: Table 91.107. Optional Arguments Value Summary -h, --help Show this help message and exit --transfer-id TRANSFER_ID Transfer id --key KEY Transfer key --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.108. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.109. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.110. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.111. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.24. zone transfer accept show Show Zone Transfer Accept Usage: Table 91.112. Positional Arguments Value Summary id Zone tranfer accept id Table 91.113. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.114. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.115. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.116. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.117. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.25. zone transfer request create Create new zone transfer request Usage: Table 91.118. Positional Arguments Value Summary zone_id Zone id to transfer. Table 91.119. Optional Arguments Value Summary -h, --help Show this help message and exit --target-project-id TARGET_PROJECT_ID Target project id to transfer to. --description DESCRIPTION Description --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.120. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.121. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.122. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.123. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.26. zone transfer request delete Delete a Zone Transfer Request Usage: Table 91.124. Positional Arguments Value Summary id Zone transfer request id Table 91.125. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 91.27. zone transfer request list List Zone Transfer Requests Usage: Table 91.126. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.127. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 91.128. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.129. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.130. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.28. zone transfer request set Set a Zone Transfer Request Usage: Table 91.131. Positional Arguments Value Summary id Zone transfer request id Table 91.132. Optional Arguments Value Summary -h, --help Show this help message and exit --description DESCRIPTION Description --no-description- all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.133. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.134. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.135. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.136. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.29. zone transfer request show Show Zone Transfer Request Details Usage: Table 91.137. Positional Arguments Value Summary id Zone tranfer request id Table 91.138. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 91.139. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 91.140. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 91.141. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.142. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack zone abandon [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone axfr [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone blacklist create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --pattern PATTERN [--description DESCRIPTION] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone blacklist delete [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone blacklist list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone blacklist set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--pattern PATTERN] [--description DESCRIPTION | --no-description] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone blacklist show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--email EMAIL] [--type TYPE] [--ttl TTL] [--description DESCRIPTION] [--masters MASTERS [MASTERS ...]] [--attributes ATTRIBUTES [ATTRIBUTES ...]] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] name",
"openstack zone delete [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone export create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_id",
"openstack zone export delete [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_export_id",
"openstack zone export list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone export show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_export_id",
"openstack zone export showfile [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_export_id",
"openstack zone import create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_file_path",
"openstack zone import delete [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_import_id",
"openstack zone import list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone import show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_import_id",
"openstack zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name NAME] [--email EMAIL] [--type TYPE] [--ttl TTL] [--description DESCRIPTION] [--status STATUS] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--email EMAIL] [--ttl TTL] [--description DESCRIPTION | --no-description] [--masters MASTERS [MASTERS ...]] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone transfer accept list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone transfer accept request [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --transfer-id TRANSFER_ID --key KEY [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone transfer accept show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone transfer request create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--target-project-id TARGET_PROJECT_ID] [--description DESCRIPTION] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] zone_id",
"openstack zone transfer request delete [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone transfer request list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack zone transfer request set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description DESCRIPTION | --no-description] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack zone transfer request show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/zone |
Chapter 8. Creating non-secure HTTP load balancers | Chapter 8. Creating non-secure HTTP load balancers You can create the following load balancers for non-secure HTTP network traffic: Section 8.1, "Creating an HTTP load balancer with a health monitor" Section 8.2, "Creating an HTTP load balancer that uses a floating IP" Section 8.3, "Creating an HTTP load balancer with session persistence" 8.1. Creating an HTTP load balancer with a health monitor For networks that are not compatible with Red Hat OpenStack Platform Networking service (neutron) floating IPs, create a load balancer to manage network traffic for non-secure HTTP applications. Create a health monitor to ensure that your back-end members remain available. Prerequisites A shared external (public) subnet that you can reach from the internet. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a listener ( listener1 ) on a port ( 80 ). Example Verify the state of the listener. Example Before going to the step, ensure that the status is ACTIVE . Create the listener default pool ( pool1 ). Example Create a health monitor ( healthmon1 ) of type ( HTTP ) on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer (lb1) settings: Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( member1 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command line interface reference 8.2. Creating an HTTP load balancer that uses a floating IP To manage network traffic for non-secure HTTP applications, create a load balancer with a virtual IP (VIP) that depends on a floating IP. The advantage of using a floating IP is that you retain control of the assigned IP, which is necessary if you need to move, destroy, or recreate your load balancer. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Note Floating IPs do not work with IPv6 networks. Prerequisites A floating IP to use with a load balancer VIP. A Red Hat OpenStack Platform Networking service (neutron) shared external (public) subnet that you can reach from the internet to use for the floating IP. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a private subnet ( private_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example In the output from step 2, record the value of load_balancer_vip_port_id , because you must provide it in a later step. Create a listener ( listener1 ) on a port ( 80 ). Example Create the listener default pool ( pool1 ). Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80: Create a health monitor ( healthmon1 ) of type ( HTTP ) on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Create a floating IP address on the shared external subnet ( public ). Example In the output from step 8, record the value of floating_ip_address , because you must provide it in a later step. Associate this floating IP ( 203.0.113.0 ) with the load balancer vip_port_id ( 69a85edd-5b1c-458f-96f2-b4552b15b8e6 ). Example Verification Verify HTTP traffic flows across the load balancer by using the floating IP ( 203.0.113.0 ). Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( member1 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command line interface reference floating in the Command line interface reference 8.3. Creating an HTTP load balancer with session persistence To manage network traffic for non-secure HTTP applications, you can create load balancers that track session persistence. Doing so ensures that when a request comes in, the load balancer directs subsequent requests from the same client to the same back-end server. Session persistence optimizes load balancing by saving time and memory. Prerequisites A shared external (public) subnet that you can reach from the internet. The non-secure web applications whose network traffic you are load balancing have cookies enabled. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a public subnet ( public_subnet ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Create a listener ( listener1 ) on a port ( 80 ). Example Create the listener default pool ( pool1 ) that defines session persistence on a cookie ( PHPSESSIONID ). Example The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80: Create a health monitor ( healthmon1 ) of type ( HTTP ) on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE . Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example In this example, the back-end servers, 192.0.2.10 and 192.0.2.11 , are named member1 and member2 , respectively: Verification View and verify the load balancer ( lb1 ) settings: Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( member1 ) has an ONLINE value for its operating_status . Example Sample output Additional resources loadbalancer in the Command line interface reference | [
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1",
"openstack loadbalancer listener show listener1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1",
"openstack loadbalancer show lb1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:13 | | vip_address | 198.51.100.12 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+",
"openstack loadbalancer member show pool1 member1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:16:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:20:45 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --vip-subnet-id private_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1",
"openstack floating ip create public",
"openstack floating ip set --port 69a85edd-5b1c-458f-96f2-b4552b15b8e6 203.0.113.0",
"curl -v http://203.0.113.0 --insecure",
"* About to connect() to 203.0.113.0 port 80 (#0) * Trying 203.0.113.0 * Connected to 203.0.113.0 (203.0.113.0) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: 203.0.113.0 > Accept: */* > < HTTP/1.1 200 OK < Content-Length: 30 < * Connection #0 to host 203.0.113.0 left intact",
"openstack loadbalancer member show pool1 member1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.02.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:28:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait",
"openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1",
"openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID",
"openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1",
"openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1",
"openstack loadbalancer show lb1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:58 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:28:42 | | vip_address | 198.51.100.22 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+",
"openstack loadbalancer member show pool1 member1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.02.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:28:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/create-non-secure-http-lbs_rhosp-lbaas |
9.4. Configure Network Bridging Using a GUI | 9.4. Configure Network Bridging Using a GUI When starting a bridge interface, NetworkManager waits for at least one port to enter the " forwarding " state before beginning any network-dependent IP configuration such as DHCP or IPv6 autoconfiguration. Static IP addressing is allowed to proceed before any ports or ports are connected or begin forwarding packets. 9.4.1. Establishing a Bridge Connection with a GUI Procedure 9.1. Adding a New Bridge Connection Using nm-connection-editor Follow the below instructions to create a new bridge connection: Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select Bridge and click Create . The Editing Bridge connection 1 window appears. Figure 9.5. Editing Bridge Connection 1 Add port devices by referring to Procedure 9.3, "Adding a Port Interface to a Bridge" below. Procedure 9.2. Editing an Existing Bridge Connection Enter nm-connection-editor in a terminal: Select the Bridge connection you want to edit. Click the Edit button. Configuring the Connection Name, Auto-Connect Behavior, and Availability Settings Five settings in the Editing dialog are common to all connection types, see the General tab: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the menu of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the dropdown menu. Firewall Zone - Select the Firewall Zone from the dropdown menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on Firewall Zones. 9.4.1.1. Configuring the Bridge Tab Interface name The name of the interface to the bridge. Bridged connections One or more port interfaces. Aging time The time, in seconds, a MAC address is kept in the MAC address forwarding database. Enable IGMP snooping If required, select the check box to enable IGMP snooping on the device. Enable STP (Spanning Tree Protocol) If required, select the check box to enable STP . Priority The bridge priority; the bridge with the lowest priority will be elected as the root bridge. Forward delay The time, in seconds, spent in both the Listening and Learning states before entering the Forwarding state. The default is 15 seconds. Hello time The time interval, in seconds, between sending configuration information in bridge protocol data units (BPDU). Max age The maximum time, in seconds, to store the configuration information from BPDUs. This value should be twice the Hello Time plus 1 but less than twice the Forwarding delay minus 1. Group forward mask This property is a mask of group addresses that allows group addresses to be forwarded. In most cases, group addresses in the range from 01:80:C2:00:00:00 to 01:80:C2:00:00:0F are not forwarded by the bridge device. This property is a mask of 16 bits, each corresponding to a group address in the above range, that must be forwarded. Note that the Group forward mask property cannot have any of the 0 , 1 , 2 bits set to 1 because those addresses are used for Spanning tree protocol (STP), Link Aggregation Control Protocol (LACP) and Ethernet MAC pause frames. Procedure 9.3. Adding a Port Interface to a Bridge To add a port to a bridge, select the Bridge tab in the Editing Bridge connection 1 window. If necessary, open this window by following the procedure in Procedure 9.2, "Editing an Existing Bridge Connection" . Click Add . The Choose a Connection Type menu appears. Select the type of connection to be created from the list. Click Create . A window appropriate to the connection type selected appears. Figure 9.6. The NetworkManager Graphical User Interface Add a Bridge Connection Select the Bridge Port tab. Configure Priority and Path cost as required. Note the STP priority for a bridge port is limited by the Linux kernel. Although the standard allows a range of 0 to 255 , Linux only allows 0 to 63 . The default is 32 in this case. Figure 9.7. The NetworkManager Graphical User Interface Bridge Port tab If required, select the Hairpin mode check box to enable forwarding of frames for external processing. Also known as virtual Ethernet port aggregator ( VEPA ) mode. Then, to configure: An Ethernet port, click the Ethernet tab and proceed to the section called "Basic Configuration Options " , or; A Bond port, click the Bond tab and proceed to Section 7.8.1.1, "Configuring the Bond Tab" , or; A Team port, click the Team tab and proceed to Section 8.14.1.1, "Configuring the Team Tab" , or; An VLAN port, click the VLAN tab and proceed to Section 10.5.1.1, "Configuring the VLAN Tab" , or; Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your new bridge connection, click the Save button to save your customized configuration. If the profile was in use while being edited, power cycle the connection to make NetworkManager apply the changes. If the profile is OFF, set it to ON or select it in the network connection icon's menu. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network window and clicking Options to return to the Editing dialog. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" , or; IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . Once saved the Bridge will appear in the Network settings tool with each port showing in the display. Figure 9.8. The NetworkManager Graphical User Interface with Bridge | [
"~]USD nm-connection-editor",
"~]USD nm-connection-editor"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configure_network_bridging_using_a_gui |
Chapter 96. Platform HTTP | Chapter 96. Platform HTTP Since Camel 3.0 Only consumer is supported The Platform HTTP is used to allow Camel to use the existing HTTP server from the runtime, for example when running Camel on Spring Boot, Quarkus, or other runtimes. 96.1. Dependencies When using platform-http with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency> 96.2. Platform HTTP Provider To use Platform HTTP a provider (engine) is required to be available on the classpath. The purpose is to have drivers for different runtimes such as Quarkus, VertX, or Spring Boot. At this moment there is only support for Quarkus and VertX by camel-platform-http-vertx . This JAR must be on the classpath otherwise the Platform HTTP component cannot be used and an exception will be thrown on startup. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>4.0.0.redhat-00036</version> <!-- use the same version as your Camel core version --> </dependency> 96.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 96.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 96.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 96.4. Component Options The Platform HTTP component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean engine (advanced) An HTTP Server engine implementation to serve the requests. PlatformHttpEngine 96.4.1. Endpoint Options The Platform HTTP endpoint is configured using URI syntax: with the following path and query parameters: 96.4.1.1. Path Parameters (1 parameters) Name Description Default Type path (consumer) Required The path under which this endpoint serves the HTTP requests, for proxy use 'proxy'. String 96.4.1.2. Query Parameters (11 parameters) Name Description Default Type consumes (consumer) The content type this endpoint accepts as an input, such as application/xml or application/json. null or / mean no restriction. String httpMethodRestrict (consumer) A comma separated list of HTTP methods to serve, e.g. GET,POST . If no methods are specified, all methods will be served. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. true boolean produces (consumer) The content type this endpoint produces, such as application/xml or application/json. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fileNameExtWhitelist (consumer (advanced)) A comma or whitespace separated list of file extensions. Uploads having these extensions will be stored locally. Null value or asterisk () will allow all files. String headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter headers to and from Camel message. HeaderFilterStrategy platformHttpEngine (advanced) An HTTP Server engine implementation to serve the requests of this endpoint. PlatformHttpEngine 96.5. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.platform-http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.platform-http.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.platform-http.enabled Whether to enable auto configuration of the platform-http component. This is enabled by default. Boolean camel.component.platform-http.engine An HTTP Server engine implementation to serve the requests. The option is a org.apache.camel.component.platform.http.spi.PlatformHttpEngine type. PlatformHttpEngine 96.5.1. Implementing a reverse proxy Platform HTTP component can act as a reverse proxy, in that case some headers are populated from the absolute URL received on the request line of the HTTP request. Those headers are specific to the underlining platform. At this moment, this feature is only supported for Vert.x in camel-platform-http-vertx component. | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>4.0.0.redhat-00036</version> <!-- use the same version as your Camel core version --> </dependency>",
"platform-http:path"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-platform-http-component-starter |
Chapter 45. Kafka Source | Chapter 45. Kafka Source Receive data from Kafka topics. 45.1. Configuration Options The following table summarizes the configuration options available for the kafka-source Kamelet: Property Name Description Type Default Example topic * Topic Names Comma separated list of Kafka topic names string bootstrapServers * Brokers Comma separated list of Kafka Broker URLs string securityProtocol Security Protocol Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported string "SASL_SSL" saslMechanism SASL Mechanism The Simple Authentication and Security Layer (SASL) Mechanism used. string "PLAIN" user * Username Username to authenticate to Kafka string password * Password Password to authenticate to kafka string autoCommitEnable Auto Commit Enable If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. boolean true allowManualCommit Allow Manual Commit Whether to allow doing manual commits boolean false autoOffsetReset Auto Offset Reset What to do when there is no initial offset. There are 3 enums and the value can be one of latest, earliest, none string "latest" pollOnError Poll On Error Behavior What to do if kafka threw an exception while polling for new messages. There are 5 enums and the value can be one of DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP string "ERROR_HANDLER" deserializeHeaders Automatically Deserialize Headers When enabled the Kamelet source will deserialize all message headers to String representation. The default is false . boolean true consumerGroup Consumer Group A string that uniquely identifies the group of consumers to which this source belongs string "my-group-id" Note Fields marked with an asterisk (*) are mandatory. 45.2. Dependencies At runtime, the `kafka-source Kamelet relies upon the presence of the following dependencies: camel:kafka camel:kamelet camel:core 45.3. Usage This section describes how you can use the kafka-source . 45.3.1. Knative Source You can use the kafka-source Kamelet as a Knative source by binding it to a Knative object. kafka-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 45.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 45.3.1.2. Procedure for using the cluster CLI Save the kafka-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f kafka-source-binding.yaml 45.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 45.3.2. Kafka Source You can use the kafka-source Kamelet as a Kafka source by binding it to a Kafka topic. kafka-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 45.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 45.3.2.2. Procedure for using the cluster CLI Save the kafka-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f kafka-source-binding.yaml 45.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 45.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/kafka-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f kafka-source-binding.yaml",
"kamel bind kafka-source -p \"source.bootstrapServers=The Brokers\" -p \"source.password=The Password\" -p \"source.topic=The Topic Names\" -p \"source.user=The Username\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f kafka-source-binding.yaml",
"kamel bind kafka-source -p \"source.bootstrapServers=The Brokers\" -p \"source.password=The Password\" -p \"source.topic=The Topic Names\" -p \"source.user=The Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/kafka-source |
Installing IBM Cloud Bare Metal (Classic) | Installing IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.13 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_ibm_cloud_bare_metal_classic/index |
Architecture | Architecture OpenShift Container Platform 4.7 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/architecture/index |
Chapter 72. KafkaClientAuthenticationScramSha256 schema reference | Chapter 72. KafkaClientAuthenticationScramSha256 schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha256 schema properties To configure SASL-based SCRAM-SHA-256 authentication, set the type property to scram-sha-256 . The SCRAM-SHA-256 authentication mechanism requires a username and password. 72.1. username Specify the username in the username property. 72.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-256 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-256 client authentication configuration for Kafka Connect authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 72.3. KafkaClientAuthenticationScramSha256 schema properties Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-256 . string username Username used for the authentication. string | [
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm",
"authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaclientauthenticationscramsha256-reference |
RPM upgrade and migration | RPM upgrade and migration Red Hat Ansible Automation Platform 2.5 Upgrade and migrate legacy deployments of Ansible Automation Platform Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_upgrade_and_migration/index |
probe::tty.receive | probe::tty.receive Name probe::tty.receive - called when a tty receives a message Synopsis tty.receive Values driver_name the driver name count The amount of characters received index The tty Index cp the buffer that was received id the tty id name the name of the module file fp The flag buffer | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-receive |
4.3. Prerequisites for Installing a Replica | 4.3. Prerequisites for Installing a Replica The installation requirements for replicas are the same as for IdM servers. Make sure that the replica machine meets all of the prerequisites listed in Section 2.1, "Prerequisites for Installing a Server" . In addition to the general server requirements, you must also meet the following conditions: The replica must be running the same or later version of IdM For example, if the master server is running on Red Hat Enterprise Linux 7 and uses the IdM 4.4 packages, then the replica must also run on Red Hat Enterprise Linux 7 or later and use IdM version 4.4 or later. This ensures that configuration can be properly copied from the server to the replica. Important IdM does not support creating a replica of an earlier version than the version of the master. If you try to create a replica using an earlier version, the installation fails. The replica needs additional ports to be open In addition to the standard IdM server port requirements described in Section 2.1.6, "Port Requirements" , make sure you also meet the following: At domain level 0, keep the TCP port 22 open on the master server during the replica setup process. This port is required in order to use SSH to connect to the master server. Note For details on domain levels, see Chapter 7, Displaying and Raising the Domain Level . If one of the servers is running Red Hat Enterprise Linux 6 and has a CA installed, keep also TCP port 7389 open during and after the replica configuration. In a purely Red Hat Enterprise Linux 7 environment, port 7389 is not required. For information on how to open ports using the firewall-cmd utility, see Section 2.1.6, "Port Requirements" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/prepping-replica |
Chapter 4. Storage | Chapter 4. Storage LVM Cache As of Red Hat Enterprise Linux 7.1, LVM cache is fully supported. This feature allows users to create logical volumes with a small fast device performing as a cache to larger slower devices. Please refer to the lvm(7) manual page for information on creating cache logical volumes. Note that the following restrictions on the use of cache logical volumes (LV): The cache LV must be a top-level device. It cannot be used as a thin-pool LV, an image of a RAID LV, or any other sub-LV type. The cache LV sub-LVs (the origin LV, metadata LV, and data LV) can only be of linear, stripe, or RAID type. The properties of the cache LV cannot be changed after creation. To change cache properties, remove the cache and recreate it with the desired properties. Storage Array Management with libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt , a storage array independent API, is fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. Please note that the Targetd plug-in is not fully supported and remains a Technology Preview. Supported hardware: NetApp Filer (ontap 7-Mode) Nexenta (nstor 3.1.x only) SMI-S, for the following vendors: HP 3PAR OS release 3.2.1 or later EMC VMAX and VNX Solutions Enabler V7.6.2.48 or later SMI-S Provider V4.6.2.18 hotfix kit or later HDS VSP Array non-embedded provider Hitachi Command Suite v8.0 or later For more information on libStorageMgmt , refer to the relevant chapter in the Storage Administration Guide . Support for LSI Syncro Red Hat Enterprise Linux 7.1 includes code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter will be provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.1 are encouraged to provide feedback to Red Hat and LSI. For more information on LSI Syncro CS solutions, please visit http://www.lsi.com/products/shared-das/pages/default.aspx . DIF/DIX Support DIF/DIX is a new addition to the SCSI Standard and a Technology Preview in Red Hat Enterprise Linux 7.1. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receive, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. For more information, refer to the section Block Devices with DIF/DIX Enabled in the Storage Administration Guide . Enhanced device-mapper-multipath Syntax Error Checking and Output The device-mapper-multipath tool has been enhanced to verify the multipath.conf file more reliably. As a result, if multipath.conf contains any lines that cannot be parsed, device-mapper-multipath reports an error and ignores these lines to avoid incorrect parsing. In addition, the following wildcard expressions have been added for the multipathd show paths format command: %N and %n for the host and target Fibre Channel World Wide Node Names, respectively. %R and %r for the host and target Fibre Channel World Wide Port Names, respectively. Now, it is easier to associate multipaths with specific Fibre Channel hosts, targets, and their ports, which allows users to manage their storage configuration more effectively. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Storage |
Migration Toolkit for Containers | Migration Toolkit for Containers OpenShift Container Platform 4.15 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | [
"status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed",
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi9 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`",
"AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`",
"AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"oc -n openshift-migration get sub",
"NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0",
"oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }'",
"{ \"name\": \"mtc-operator\", \"package\": \"mtc-operator\", \"channel\": \"release-v1.7\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"package\": \"redhat-oadp-operator\", \"channel\": \"stable-1.0\" }",
"oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{\"spec\": {\"channel\": \"release-v1.8\"}}'",
"subscription.operators.coreos.com/mtc-operator patched",
"oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{\"spec\": {\"channel\":\"stable-1.2\"}}'",
"subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched",
"oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (.\"state\"==\"AtLatestKnown\")'",
"oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }'",
"{ \"name\": \"mtc-operator\", \"channel\": \"release-v1.8\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"channel\": \"stable-1.2\" }",
"Confirm that the `mtc-operator.v1.8.0` and `oadp-operator.v1.2.x` packages are installed by running the following command:",
"oc -n openshift-migration get csv",
"NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i",
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3",
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc create token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"az group list",
"{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/migration_toolkit_for_containers/index |
Chapter 6. Customizing component deployment resources | Chapter 6. Customizing component deployment resources 6.1. Overview of component resource customization You can customize deployment resources that are related to the Red Hat OpenShift AI Operator, for example, CPU and memory limits and requests. For resource customizations to persist without being overwritten by the Operator, the opendatahub.io/managed: true annotation must not be present in the YAML file for the component deployment. This annotation is absent by default. The following table shows the deployment names for each component in the redhat-ods-applications namespace. Important Components denoted with (Technology Preview) in this table are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using Technology Preview features in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Component Deployment names CodeFlare codeflare-operator-manager KServe kserve-controller-manager odh-model-controller Ray kuberay-operator Kueue kueue-controller-manager Workbenches notebook-controller-deployment odh-notebook-controller-manager Dashboard rhods-dashboard Model serving modelmesh-controller odh-model-controller Model registry (Technology Preview) model-registry-operator-controller-manager Data science pipelines data-science-pipelines-operator-controller-manager Training Operator kubeflow-training-operator 6.2. Customizing component resources You can customize component deployment resources by updating the .spec.template.spec.containers.resources section of the YAML file for the component deployment. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Workloads > Deployments . From the Project drop-down list, select redhat-ods-applications . In the Name column, click the name of the deployment for the component that you want to customize resources for. Note For more information about the deployment names for each component, see Overview of component resource customization . On the Deployment details page that appears, click the YAML tab. Find the .spec.template.spec.containers.resources section. Update the value of the resource that you want to customize. For example, to update the memory limit to 500Mi, make the following change: Click Save . Click Reload . Verification Log in to OpenShift AI and verify that your resource changes apply. 6.3. Disabling component resource customization You can disable customization of component deployment resources, and restore default values, by adding the opendatahub.io/managed: true annotation to the YAML file for the component deployment. Important Manually removing or setting the opendatahub.io/managed: true annotation to false after manually adding it to the YAML file for a component deployment might cause unexpected cluster issues. To remove the annotation from a deployment, use the steps described in Re-enabling component resource customization . Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Workloads > Deployments . From the Project drop-down list, select redhat-ods-applications . In the Name column, click the name of the deployment for the component to which you want to add the annotation. Note For more information about the deployment names for each component, see Overview of component resource customization . On the Deployment details page that appears, click the YAML tab. Find the metadata.annotations: section. Add the opendatahub.io/managed: true annotation. Click Save . Click Reload . Verification The opendatahub.io/managed: true annotation appears in the YAML file for the component deployment. 6.4. Re-enabling component resource customization You can re-enable customization of component deployment resources after manually disabling it. Important Manually removing or setting the opendatahub.io/managed: annotation to false after adding it to the YAML file for a component deployment might cause unexpected cluster issues. To remove the annotation from a deployment, use the following steps to delete the deployment. The controller pod for the deployment will automatically redeploy with the default settings. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Workloads > Deployments . From the Project drop-down list, select redhat-ods-applications . In the Name column, click the name of the deployment for the component for which you want to remove the annotation. Click the Options menu . Click Delete Deployment . Verification The controller pod for the deployment automatically redeploys with the default settings. | [
"containers: - resources: limits: cpu: '2' memory: 500Mi requests: cpu: '1' memory: 1Gi",
"metadata: annotations: opendatahub.io/managed: true"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_openshift_ai/customizing-component-deployment-resources_resource-mgmt |
Chapter 3. Standalone upgrade | Chapter 3. Standalone upgrade In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 3.1.3 3.1.3 3.2.2 3.2.2 3.3.4 3.3.4 3.4.z 3.4.z 3.5.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported: 3.3.z 3.6.z 3.4.z 3.6.z 3.4.z 3.7.z 3.5.z 3.7.z 3.7.z 3.9.z For users wanting to upgrade the Red Hat Quay Operator, see Upgrading the Red Hat Quay Operator Overview . This document describes the steps needed to perform each individual upgrade. Determine your current version and then follow the steps in sequential order, starting with your current version and working up to your desired target version. Upgrade to 3.9.z from 3.8.z Upgrade to 3.9.z from 3.7.z Upgrade to 3.8.z from 3.7.z Upgrade to 3.7.z from 3.6.z Upgrade to 3.7.z from 3.5.z Upgrade to 3.7.z from 3.4.z Upgrade to 3.7.z from 3.3.z Upgrade to 3.6.z from 3.5.z Upgrade to 3.6.z from 3.4.z Upgrade to 3.6.z from 3.3.z Upgrade to 3.5.z from 3.4.z Upgrade to 3.4.z from 3.3.4 Upgrade to 3.3.4 from 3.2.2 Upgrade to 3.2.2 from 3.1.3 Upgrade to 3.1.3 from 3.0.5 Upgrade to 3.0.5 from 2.9.5 See the Red Hat Quay Release Notes for information on features for individual releases. The general procedure for a manual upgrade consists of the following steps: Stop the Quay and Clair containers. Backup the database and image storage (optional but recommended). Start Clair using the new version of the image. Wait until Clair is ready to accept connections before starting the new version of Quay. 3.1. Accessing images Images for Quay 3.4.0 and later are available from registry.redhat.io and registry.access.redhat.com , with authentication set up as described in Red Hat Container Registry Authentication . Images for Quay 3.3.4 and earlier are available from quay.io , with authentication set up as described in Accessing Red Hat Quay without a CoreOS login . 3.2. Upgrade to 3.9.z from 3.8.z If you are upgrading your standalone Red Hat Quay deployment from 3.8.z 3.9, it is highly recommended that you upgrade PostgreSQL from version 10 13. To upgrade PostgreSQL from 10 13, you must bring down your PostgreSQL 10 database and run a migration script to initiate the process. Use the following procedure to upgrade PostgreSQL from 10 13 on a standalone Red Hat Quay deployment. Procedure Enter the following command to scale down the Red Hat Quay container: USD sudo podman stop <quay_container_name> Optional. If you are using Clair, enter the following command to stop the Clair container: USD sudo podman stop <clair_container_id> Run the Podman process from SCLOrg's Data Migration procedure, which allows for data migration from a remote PostgreSQL server: USD sudo podman run -d --name <migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 \ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \ -v </host/data/directory:/var/lib/pgsql/data:Z> 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] rhel8/postgresql-13 1 The name of your PostgreSQL 13 migration database. 2 Your current Red Hat Quay PostgreSQL 13 database container IP address. Can obtained by running the following command: sudo podman inspect -f "{{.NetworkSettings.IPAddress}}" postgresql-quay . 3 You must specify a different volume mount point than the one from your initial PostgreSQL 10 deployment, and modify the access control lists for said directory. For example: USD mkdir -p /host/data/directory USD setfacl -m u:26:-wx /host/data/directory This prevents data from being overwritten by the new container. Optional. If you are using Clair, repeat the step for the Clair PostgreSQL database container. Stop the PostgreSQL 10 container: USD sudo podman stop <postgresql_container_name> After completing the PostgreSQL migration, run the PostgreSQL 13 container, using the new data volume mount from Step 3, for example, </host/data/directory:/var/lib/postgresql/data> : USD sudo podman run -d --rm --name postgresql-quay \ -e POSTGRESQL_USER=<username> \ -e POSTGRESQL_PASSWORD=<password> \ -e POSTGRESQL_DATABASE=<quay_database_name> \ -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> \ -p 5432:5432 \ -v </host/data/directory:/var/lib/pgsql/data:Z> \ registry.redhat.io/rhel8/postgresql-13:1-109 Optional. If you are using Clair, repeat the step for the Clair PostgreSQL database container. Start the Red Hat Quay container: USD sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \ -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \ -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \ {productrepo}/{quayimage}:{productminv} Optional. Restart the Clair container, for example: USD sudo podman run -d --name clairv4 \ -p 8081:8081 -p 8088:8088 \ -e CLAIR_CONF=/clair/config.yaml \ -e CLAIR_MODE=combo \ registry.redhat.io/quay/clair-rhel8:v3.9.0 For more information, see Data Migration . 3.2.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.9.10 Clair: registry.redhat.io/quay/clair-rhel8:v3.9.10 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6 3.3. Upgrade to 3.9.z from 3.7.z If you are upgrading your standalone Red Hat Quay deployment from 3.8.z 3.9, it is highly recommended that you upgrade PostgreSQL from version 10 13. To upgrade PostgreSQL from 10 13, you must bring down your PostgreSQL 10 database and run a migration script to initiate the process: Note When upgrading from Red Hat Quay 3.7 to 3.9, you might receive the following error: pg_dumpall: error: query failed: ERROR: xlog flush request 1/B446CCD8 is not satisfied --- flushed only to 1/B0013858 . As a workaround to this issue, you can delete the quayregistry-clair-postgres-upgrade job on your OpenShift Container Platform deployment, which should resolve the issue. 3.3.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.9.10 Clair: registry.redhat.io/quay/clair-rhel8:v3.9.10 PostgreSQL: registry.redhat.io/rhel8/postgresql-13 Redis: registry.redhat.io/rhel8/redis-6 3.4. Upgrade to 3.8.z from 3.7.z 3.4.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.8.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.8.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.5. Upgrade to 3.7.z from 3.6.z 3.5.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.7.0 Clair: registry.redhat.io/quay/clair-rhel8:3.7.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.6. Upgrade to 3.7.z from 3.5.z 3.6.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.7.0 Clair: registry.redhat.io/quay/clair-rhel8:3.7.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.7. Upgrade to 3.7.z from 3.4.z 3.7.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.7.0 Clair: registry.redhat.io/quay/clair-rhel8:3.7.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.8. Upgrade to 3.7.z from 3.3.z Upgrading to Red Hat Quay 3.7 from 3.3. is unsupported. Users must first upgrade to 3.6 from 3.3, and then upgrade to 3.7. For more information, see Upgrade to 3.6.z from 3.3.z . 3.9. Upgrade to 3.6.z from 3.5.z 3.9.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.6.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.6.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.10. Upgrade to 3.6.z from 3.4.z Note Red Hat Quay 3.6 supports direct, single-step upgrade from 3.4.z. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. Upgrading to Red Hat Quay 3.6 from 3.4.z requires a database migration which does not support downgrading back to a prior version of Red Hat Quay. Please back up your database before performing this migration. Users will also need to configure a completely new Clair v4 instance to replace the old Clair v2 when upgrading from 3.4.z. For instructions on configuring Clair v4, see Setting up Clair on a non-OpenShift Red Hat Quay deployment . 3.10.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.6.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.6.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.11. Upgrade to 3.6.z from 3.3.z Note Red Hat Quay 3.6 supports direct, single-step upgrade from 3.3.z. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. Upgrading to Red Hat Quay 3.6.z from 3.3.z requires a database migration which does not support downgrading back to a prior version of Red Hat Quay. Please back up your database before performing this migration. Users will also need to configure a completely new Clair v4 instance to replace the old Clair v2 when upgrading from 3.3.z. For instructions on configuring Clair v4, see Setting up Clair on a non-OpenShift Red Hat Quay deployment . 3.11.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.6.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.6.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.11.2. Swift configuration when upgrading from 3.3.z to 3.6 When upgrading from Red Hat Quay 3.3.z to 3.6.z, some users might receive the following error: Switch auth v3 requires tenant_id (string) in os_options . As a workaround, you can manually update your DISTRIBUTED_STORAGE_CONFIG to add the os_options and tenant_id parameters: DISTRIBUTED_STORAGE_CONFIG: brscale: - SwiftStorage - auth_url: http://****/v3 auth_version: "3" os_options: tenant_id: **** project_name: ocp-base user_domain_name: Default storage_path: /datastorage/registry swift_container: ocp-svc-quay-ha swift_password: ***** swift_user: ***** 3.12. Upgrade to 3.5.7 from 3.4.z 3.12.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.5.7 Clair: registry.redhat.io/quay/clair-rhel8:v3.5.7 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.13. Upgrade to 3.4.6 from 3.3.z Upgrading to Quay 3.4 requires a database migration which does not support downgrading back to a prior version of Quay. Please back up your database before performing this migration. 3.13.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.4.6 Clair: registry.redhat.io/quay/clair-rhel8:v3.4.6 PostgreSQL: registry.redhat.io/rhel8/postgresql-10 Redis: registry.redhat.io/rhel8/redis-6 3.14. Upgrade to 3.3.4 from 3.2.z 3.14.1. Target images Quay: quay.io/redhat/quay:v3.3.4 Clair: registry.redhat.io/quay/clair-rhel8:v3.3.4 PostgreSQL: rhscl/postgresql-96-rhel7 Redis: registry.redhat.io/rhel8/redis-6 3.15. Upgrade to 3.2.2 from 3.1.z Once your cluster is running any Red Hat Quay 3.1.z version, to upgrade your cluster to 3.2.2 you must bring down your entire cluster and make a small change to the configuration before bringing it back up with the 3.2.2 version. Warning Once you set the value of DATABASE_SECRET_KEY in this procedure, do not ever change it. If you do so, then existing robot accounts, API tokens, etc. cannot be used anymore. You would have to create a new robot account and API tokens to use with Quay. Take all hosts in the Red Hat Quay cluster out of service. Generate some random data to use as a database secret key. For example: Add a new DATABASE_SECRET_KEY field to your config.yaml file. For example: Note For an OpenShift installation, the config.yaml file is stored as a secret. Bring up one Quay container to complete the migration to 3.2.2. Once the migration is done, make sure the same config.yaml is available on all nodes and bring up the new quay 3.2.2 service on those nodes. Start 3.0.z versions of quay-builder and Clair to replace any instances of those containers you want to return to your cluster. 3.15.1. Target images Quay: quay.io/redhat/quay:v3.2.2 Clair: registry.redhat.io/quay/clair-rhel8:v3.9.10 PostgreSQL: rhscl/postgresql-96-rhel7 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 3.16. Upgrade to 3.1.3 from 3.0.z 3.16.1. Target images Quay: quay.io/redhat/quay:v3.1.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.9.10 PostgreSQL: rhscl/postgresql-96-rhel7 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 3.17. Upgrade to 3.0.5 from 2.9.5 For the 2.9.5 to 3.0.5 upgrade, you can either do the whole upgrade with Red Hat Quay down (synchronous upgrade) or only bring down Red Hat Quay for a few minutes and have the bulk of the upgrade continue with Red Hat Quay running (background upgrade). A background upgrade could take longer to run the upgrade depending on how many tags need to be processed. However, there is less total downtime. The downside of a background upgrade is that you will not have access to the latest features until the upgrade completes. The cluster runs from the Quay v3 container in v2 compatibility mode until the upgrade is complete. 3.17.1. Overview of upgrade Follow the procedure below if you are starting with a Red Hat Quay 2.y.z cluster. Before upgrading to the latest Red Hat Quay 3.x version, you must first migrate that cluster to 3.0.5, as described here . Once your cluster is running 3.0.5, you can then upgrade to the latest 3.x version by sequentially upgrading to each minor version in turn. For example: 3.0.5 3.1.3 3.1.3 3.2.2 3.2.2 3.3.4 3.3.4 3.4.z Before beginning your Red Hat Quay 2.y.z to 3.0 upgrade, please note the following: Synchronous upgrade : For a synchronous upgrade, expect less than one hour of total downtime for small installations. Consider a small installation to contain a few thousand container image tags or fewer. For that size installation, you could probably get by with just a couple hours of scheduled downtime. The entire Red Hat Quay service is down for the duration, so if you were to try a synchronous upgrade on a registry with millions of tags, you could potentially be down for several days. Background upgrade : For a background upgrade (also called a compatibility mode upgrade), after a short shutdown your Red Hat Quay cluster upgrade runs in the background. For large Red Hat Quay registries, this could take weeks to complete, but the cluster continues to operate in v2 mode for the duration of the upgrade. As a point of reference, one Red Hat Quay v3 upgrade took four days to process approximately 30 million tags across six machines. Full features on completion : Before you have access to features associated with Docker version 2, schema 2 changes (such as support for containers of different architectures), the entire migration must complete. Other v3 features are immediately available when you switch over. Upgrade complete : When the upgrade is complete, you need to set V3_UPGRADE_MODE: complete in the Red Hat Quay config.yaml file for the new features to be available. All new Red Hat Quay v3 installations automatically have that set. 3.17.2. Prerequisites To assure the best results, we recommend the following prerequisites: Back up your Red Hat Quay database before starting the upgrade (doing regular backups is a general best practice). A good time to do this is right after you have taken down the Red Hat Quay cluster to do the upgrade. Back up your storage (also a general best practice). Upgrade your current Red Hat Quay 2.y.z setup to the latest 2.9.z version (currently 2.9.5) before starting the v3 upgrade. To do that: While the Red Hat Quay cluster is still running, take one node and change the Quay container on that system to a Quay container that is running the latest 2.9.z version. Wait for all the database migrations to run, bringing the database up to the latest 2.9.z version. This should only take a few minutes to a half an hour. Once that is done, replace the Quay container on all the existing nodes with the same latest 2.9.z version. With the entire Red Hat Quay cluster on the new version, you can proceed to the v3 upgrade. 3.17.3. Choosing upgrade type Choose between a synchronous upgrade (complete the upgrade in downtime) and a background upgrade (complete the upgrade while Red Hat Quay is still running). Both of these major-release upgrades require that the Red Hat Quay cluster be down for at least a short period of time. Regardless of which upgrade type you choose, during the time that the Red Hat Quay cluster is down, if you are using builder and Clair images, you need to also upgrade to those new images: Builder : quay.io/redhat/quay-builder:v3.0.5 Clair : quay.io/redhat/clair-jwt:v3.0.5 Both of those images are available from the registry.redhat.io/quay repository. 3.17.4. Running a synchronous upgrade To run a synchronous upgrade, where your whole cluster is down for the entire upgrade, do the following: Take down your entire Red Hat Quay cluster, including any quay-builder and Clair containers. Add the following setting to the config.yaml file on all nodes: V3_UPGRADE_MODE: complete Pull and start up the v3 container on a single node and wait for however long it takes to do the upgrade (it will take a few minutes). Use the following container or later: Quay : quay.io/redhat/quay:v3.0.5 Note that the Quay container comes up on ports 8080 and 8443 for Red Hat Quay 3, instead of 80 and 443, as they did for Red Hat Quay 2. Therefore, we recommend remapping 8080 and 8443 into 80 and 443, respectively, as shown in this example: After the upgrade completes, bring the Red Hat Quay 3 container up on all other nodes. Start 3.0.z versions of quay-builder and Clair to replace any instances of those containers you want to return to your cluster. Verify that Red Hat Quay is working, including pushes and pulls of containers compatible with Docker version 2, schema 2. This can include windows container images and images of different computer architectures (arm, ppc, etc.). 3.17.5. Running a background upgrade To run a background upgrade, you need only bring down your cluster for a short period of time on two occasions. When you bring the cluster back up after the first downtime, the quay v3 container runs in v2 compatibility mode as it backfills the database. This background process can take hours or even days to complete. Background upgrades are recommended for large installations where downtime of more than a few hours would be a problem. For this type of upgrade, you put Red Hat Quay into a compatibility mode, where you have a Quay 3 container running, but it is running on the old data model while the upgrade completes. Here's what you do: Pull the Red Hat Quay 3 container to all the nodes. Use the following container or later: quay.io/redhat/quay:v3.0.5 Take down your entire Red Hat Quay cluster, including any quay-builder and Clair containers. Edit the config.yaml file on each node and set the upgrade mode to background as follows: V3_UPGRADE_MODE: background Bring the Red Hat Quay 3 container up on a single node and wait for the migrations to complete (should take a few minutes maximum). Here is an example of that command: Note that the Quay container comes up on ports 8080 and 8443 for Red Hat Quay 3, instead of 80 and 443, as they did for Red Hat Quay 2. Therefore, we recommend remapping 8080 and 8443 into 80 and 443, respectively, as shown in this example: Bring the Red Hat Quay 3 container up on all the other nodes. Monitor the /upgradeprogress API endpoint until it reports done enough to move to the step (the status reaches 99%). For example, view https://myquay.example.com/upgradeprogress or use some other tool to query the API. Once the background process is far enough along you have to schedule another maintenance window. During your scheduled maintenance, take the entire Red Hat Quay cluster down. Edit the config.yaml file on each node and set the upgrade mode to complete as follows: V3_UPGRADE_MODE: complete Bring Red Hat Quay back up on one node to have it do a final check. Once the final check is done, bring Red Hat Quay v3 back up on all the other nodes. Start 3.0.z versions of quay-builder and Clair to replace any instances of those containers you want to return to your cluster. Verify Quay is working, including pushes and pulls of containers compatible with Docker version 2, schema 2. This can include windows container images and images of different computer architectures (arm, ppc, etc.). 3.17.6. Target images Quay: quay.io/redhat/quay:v3.0.5 Clair: quay.io/redhat/clair-jwt:v3.0.5 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 PostgreSQL: rhscl/postgresql-96-rhel7 Builder: quay.io/redhat/quay-builder:v3.0.5 3.18. Upgrading a geo-replication deployment of Red Hat Quay Use the following procedure to upgrade your geo-replication Red Hat Quay deployment. Important When upgrading geo-replication Red Hat Quay deployments to the y-stream release (for example, Red Hat Quay 3.7 Red Hat Quay 3.8), or geo-replication deployments, you must stop operations before upgrading. There is intermittent downtime down upgrading from one y-stream release to the . It is highly recommended to back up your Red Hat Quay deployment before upgrading. Prerequisites You have logged into registry.redhat.io Procedure This procedure assumes that you are running Red Hat Quay services on three (or more) systems. For more information, see Preparing for Red Hat Quay high availability . Obtain a list of all Red Hat Quay instances on each system running a Red Hat Quay instance. Enter the following command on System A to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01 Enter the following command on System B to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02 Enter the following command on System C to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03 Temporarily shut down all Red Hat Quay instances on each system. Enter the following command on System A to shut down the Red Hat Quay instance: USD sudo podman stop ec16ece208c0 Enter the following command on System B to shut down the Red Hat Quay instance: USD sudo podman stop 7ae0c9a8b37d Enter the following command on System C to shut down the Red Hat Quay instance: USD sudo podman stop e75c4aebfee9 Obtain the latest Red Hat Quay version, for example, Red Hat Quay 3.9, on each system. Enter the following command on System A to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0 Enter the following command on System B to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0 Enter the following command on System C to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0 On System A of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3.9: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay01 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:v3.8.0 Wait for the new Red Hat Quay container to become fully operational on System A. You can check the status of the container by entering the following command: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v3.8.0 registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01 Optional: Ensure that Red Hat Quay is fully operation by navigating to the Red Hat Quay UI. After ensuring that Red Hat Quay on System A is fully operational, run the new image versions on System B and on System C. On System B of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3.9: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay02 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:v3.8.0 On System C of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3.9: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay03 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:v3.8.0 You can check the status of the containers on System B and on System C by entering the following command: USD sudo podman ps | [
"sudo podman stop <quay_container_name>",
"sudo podman stop <clair_container_id>",
"sudo podman run -d --name <migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] rhel8/postgresql-13",
"mkdir -p /host/data/directory",
"setfacl -m u:26:-wx /host/data/directory",
"sudo podman stop <postgresql_container_name>",
"sudo podman run -d --rm --name postgresql-quay -e POSTGRESQL_USER=<username> -e POSTGRESQL_PASSWORD=<password> -e POSTGRESQL_DATABASE=<quay_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5432:5432 -v </host/data/directory:/var/lib/pgsql/data:Z> registry.redhat.io/rhel8/postgresql-13:1-109",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:v3.9.0",
"DISTRIBUTED_STORAGE_CONFIG: brscale: - SwiftStorage - auth_url: http://****/v3 auth_version: \"3\" os_options: tenant_id: **** project_name: ocp-base user_domain_name: Default storage_path: /datastorage/registry swift_container: ocp-svc-quay-ha swift_password: ***** swift_user: *****",
"openssl rand -hex 48 2d023adb9c477305348490aa0fd9c",
"DATABASE_SECRET_KEY: \"2d023adb9c477305348490aa0fd9c\"",
"docker run --restart=always -p 80:8080 -p 443:8443 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d quay.io/redhat/quay:v3.0.5",
"docker run --restart=always -p 80:8080 -p 443:8443 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d quay.io/redhat/quay:v3.0.5",
"V3_UPGRADE_MODE: complete",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03",
"sudo podman stop ec16ece208c0",
"sudo podman stop 7ae0c9a8b37d",
"sudo podman stop e75c4aebfee9",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay01 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.8.0",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v3.8.0 registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay02 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.8.0",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay03 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.8.0",
"sudo podman ps"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/upgrade_red_hat_quay/standalone-upgrade |
Chapter 7. Partition operations with parted | Chapter 7. Partition operations with parted parted is a program to manipulate disk partitions. It supports multiple partition table formats, including MS-DOS and GPT. It is useful for creating space for new operating systems, reorganizing disk usage, and copying data to new hard disks. 7.1. Viewing the partition table with parted Display the partition table of a block device to see the partition layout and details about individual partitions. You can view the partition table on a block device using the parted utility. Procedure Start the parted utility. For example, the following output lists the device /dev/sda : View the partition table: Optional: Switch to the device you want to examine : For a detailed description of the print command output, see the following: Model: ATA SAMSUNG MZNLN256 (scsi) The disk type, manufacturer, model number, and interface. Disk /dev/sda: 256GB The file path to the block device and the storage capacity. Partition Table: msdos The disk label type. Number The partition number. For example, the partition with minor number 1 corresponds to /dev/sda1 . Start and End The location on the device where the partition starts and ends. Type Valid types are metadata, free, primary, extended, or logical. File system The file system type. If the File system field of a device shows no value, this means that its file system type is unknown. The parted utility cannot recognize the file system on encrypted devices. Flags Lists the flags set for the partition. Available flags are boot , root , swap , hidden , raid , lvm , or lba . Additional resources parted(8) man page on your system 7.2. Creating a partition table on a disk with parted Use the parted utility to format a block device with a partition table more easily. Warning Formatting a block device with a partition table deletes all data stored on the device. Procedure Start the interactive parted shell: Determine if there already is a partition table on the device: If the device already contains partitions, they will be deleted in the following steps. Create the new partition table: Replace table-type with with the intended partition table type: msdos for MBR gpt for GPT Example 7.1. Creating a GUID Partition Table (GPT) table To create a GPT table on the disk, use: The changes start applying after you enter this command. View the partition table to confirm that it is created: Exit the parted shell: Additional resources parted(8) man page on your system 7.3. Creating a partition with parted As a system administrator, you can create new partitions on a disk by using the parted utility. Note The required partitions are swap , /boot/ , and / (root) . Prerequisites A partition table on the disk. If the partition you want to create is larger than 2TiB, format the disk with the GUID Partition Table (GPT) . Procedure Start the parted utility: View the current partition table to determine if there is enough free space: Resize the partition in case there is not enough free space. From the partition table, determine: The start and end points of the new partition. On MBR, what partition type it should be. Create the new partition: Replace part-type with with primary , logical , or extended . This applies only to the MBR partition table. Replace name with an arbitrary partition name. This is required for GPT partition tables. Replace fs-type with xfs , ext2 , ext3 , ext4 , fat16 , fat32 , hfs , hfs+ , linux-swap , ntfs , or reiserfs . The fs-type parameter is optional. Note that the parted utility does not create the file system on the partition. Replace start and end with the sizes that determine the starting and ending points of the partition, counting from the beginning of the disk. You can use size suffixes, such as 512MiB , 20GiB , or 1.5TiB . The default size is in megabytes. Example 7.2. Creating a small primary partition To create a primary partition from 1024MiB until 2048MiB on an MBR table, use: The changes start applying after you enter the command. View the partition table to confirm that the created partition is in the partition table with the correct partition type, file system type, and size: Exit the parted shell: Register the new device node: Verify that the kernel recognizes the new partition: Additional resources parted(8) man page on your system Creating a partition table on a disk with parted Resizing a partition with parted 7.4. Removing a partition with parted Using the parted utility, you can remove a disk partition to free up disk space. Warning Removing a partition deletes all data stored on the partition. Procedure Start the interactive parted shell: Replace block-device with the path to the device where you want to remove a partition: for example, /dev/sda . View the current partition table to determine the minor number of the partition to remove: Remove the partition: Replace minor-number with the minor number of the partition you want to remove. The changes start applying as soon as you enter this command. Verify that you have removed the partition from the partition table: Exit the parted shell: Verify that the kernel registers that the partition is removed: Remove the partition from the /etc/fstab file, if it is present. Find the line that declares the removed partition, and remove it from the file. Regenerate mount units so that your system registers the new /etc/fstab configuration: If you have deleted a swap partition or removed pieces of LVM, remove all references to the partition from the kernel command line: List active kernel options and see if any option references the removed partition: Remove the kernel options that reference the removed partition: To register the changes in the early boot system, rebuild the initramfs file system: Additional resources parted(8) man page on your system 7.5. Resizing a partition with parted Using the parted utility, extend a partition to use unused disk space, or shrink a partition to use its capacity for different purposes. Prerequisites Back up the data before shrinking a partition. If the partition you want to create is larger than 2TiB, format the disk with the GUID Partition Table (GPT) . If you want to shrink the partition, first shrink the file system so that it is not larger than the resized partition. Note XFS does not support shrinking. Procedure Start the parted utility: View the current partition table: From the partition table, determine: The minor number of the partition. The location of the existing partition and its new ending point after resizing. Resize the partition: Replace 1 with the minor number of the partition that you are resizing. Replace 2 with the size that determines the new ending point of the resized partition, counting from the beginning of the disk. You can use size suffixes, such as 512MiB , 20GiB , or 1.5TiB . The default size is in megabytes. View the partition table to confirm that the resized partition is in the partition table with the correct size: Exit the parted shell: Verify that the kernel registers the new partition: Optional: If you extended the partition, extend the file system on it as well. Additional resources parted(8) man page on your system | [
"parted /dev/sda",
"(parted) print Model: ATA SAMSUNG MZNLN256 (scsi) Disk /dev/sda: 256GB Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 1049kB 269MB 268MB primary xfs boot 2 269MB 34.6GB 34.4GB primary 3 34.6GB 45.4GB 10.7GB primary 4 45.4GB 256GB 211GB extended 5 45.4GB 256GB 211GB logical",
"(parted) select block-device",
"parted block-device",
"(parted) print",
"(parted) mklabel table-type",
"(parted) mklabel gpt",
"(parted) print",
"(parted) quit",
"parted block-device",
"(parted) print",
"(parted) mkpart part-type name fs-type start end",
"(parted) mkpart primary 1024MiB 2048MiB",
"(parted) print",
"(parted) quit",
"udevadm settle",
"cat /proc/partitions",
"parted block-device",
"(parted) print",
"(parted) rm minor-number",
"(parted) print",
"(parted) quit",
"cat /proc/partitions",
"systemctl daemon-reload",
"grubby --info=ALL",
"grubby --update-kernel=ALL --remove-args=\" option \"",
"dracut --force --verbose",
"parted block-device",
"(parted) print",
"(parted) resizepart 1 2GiB",
"(parted) print",
"(parted) quit",
"cat /proc/partitions"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/partition-operations-with-parted_managing-file-systems |
Chapter 22. Graphical User Interface Tools for Guest Virtual Machine Management | Chapter 22. Graphical User Interface Tools for Guest Virtual Machine Management In addition to virt-manager , Red Hat Enterprise Linux 7 provides the following tools that enable you to access your guest virtual machine's console. 22.1. virt-viewer virt-viewer is a minimalistic command-line utility for displaying the graphical console of a guest virtual machine. The console is accessed using the VNC or SPICE protocol. The guest can be referred to by its name, ID, or UUID. If the guest is not already running, the viewer can be set to wait until it starts before attempting to connect to the console. The viewer can connect to remote hosts to get the console information and then also connect to the remote console using the same network transport. In comparison with virt-manager , virt-viewer offers a smaller set of features, but is less resource-demanding. In addition, unlike virt-manager , virt-viewer in most cases does not require read-write permissions to libvirt. Therefore, it can be used by non-privileged users who should be able to connect to and display guests, but not to configure them. To install virt-viewer , run: Syntax The basic virt-viewer command-line syntax is as follows: To see the full list of options available for use with virt-viewer, see the virt-viewer man page. Connecting to a guest virtual machine If used without any options, virt-viewer lists guests that it can connect to on the default hypervisor of the local system. To connect to a specified guest virtual machine that uses the default hypervisor: To connect to a guest virtual machine that uses the KVM-QEMU hypervisor: To connect to a remote console using TLS: To connect to a console on a remote host by using SSH, look up the guest configuration and then make a direct non-tunneled connection to the console: Interface By default, the virt-viewer interface provides only the basic tools for interacting with the guest: Figure 22.1. Sample virt-viewer interface Setting hotkeys To create a customized keyboard shortcut (also referred to as a hotkey) for the virt-viewer session, use the --hotkeys option: The following actions can be assigned to a hotkey: toggle-fullscreen release-cursor smartcard-insert smartcard-remove Key-name combination hotkeys are not case-sensitive. Note that the hotkey setting does not carry over to future virt-viewer sessions. Example 22.1. Setting a virt-viewer hotkey To add a hotkey to change to full screen mode when connecting to a KVM-QEMU guest called testguest: Kiosk mode In kiosk mode, virt-viewer only allows the user to interact with the connected desktop, and does not provide any options to interact with the guest settings or the host system unless the guest is shut down. This can be useful for example when an administrator wants to restrict a user's range of actions to a specified guest. To use kiosk mode, connect to a guest with the -k or --kiosk option. Example 22.2. Using virt-viewer in kiosk mode To connect to a KVM-QEMU virtual machine in kiosk mode that terminates after the machine is shut down, use the following command: Note, however, that kiosk mode alone cannot ensure that the user does not interact with the host system or the guest settings after the guest is shut down. This would require further security measures, such as disabling the window manager on the host. | [
"yum install virt-viewer",
"virt-viewer [OPTIONS] {guest-name|id|uuid}",
"virt-viewer guest-name",
"virt-viewer --connect qemu:///system guest-name",
"virt-viewer --connect qemu://example.org/ guest-name",
"virt-viewer --direct --connect qemu+ssh:// [email protected]/ guest-name",
"virt-viewer --hotkeys= action1 = key-combination1 [, action2 = key-combination2 ] guest-name",
"virt-viewer --hotkeys=toggle-fullscreen=shift+f11 qemu:///system testguest",
"virt-viewer --connect qemu:///system guest-name --kiosk --kiosk-quit on-disconnect"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-graphic_user_interface_tools_for_guest_virtual_machine_management |
23.2.3. Starting ptp4l | 23.2.3. Starting ptp4l The ptp4l program tries to use hardware time stamping by default. To use ptp4l with hardware time stamping capable drivers and NICs, you must provide the network interface to use with the -i option. Enter the following command as root : Where eth3 is the interface you want to configure. Below is example output from ptp4l when the PTP clock on the NIC is synchronized to a master: The master offset value is the measured offset from the master in nanoseconds. The s0 , s1 , s2 strings indicate the different clock servo states: s0 is unlocked, s1 is clock step and s2 is locked. Once the servo is in the locked state ( s2 ), the clock will not be stepped (only slowly adjusted) unless the pi_offset_const option is set to a positive value in the configuration file (described in the ptp4l(8) man page). The freq value is the frequency adjustment of the clock in parts per billion (ppb). The path delay value is the estimated delay of the synchronization messages sent from the master in nanoseconds. Port 0 is a Unix domain socket used for local PTP management. Port 1 is the eth3 interface (based on the example above.) INITIALIZING, LISTENING, UNCALIBRATED and SLAVE are some of possible port states which change on the INITIALIZE, RS_SLAVE, MASTER_CLOCK_SELECTED events. In the last state change message, the port state changed from UNCALIBRATED to SLAVE indicating successful synchronization with a PTP master clock. The ptp4l program can also be started as a service by running: When running as a service, options are specified in the /etc/sysconfig/ptp4l file. More information on the different ptp4l options and the configuration file settings can be found in the ptp4l(8) man page. By default, messages are sent to /var/log/messages . However, specifying the -m option enables logging to standard output which can be useful for debugging purposes. To enable software time stamping, the -S option needs to be used as follows: 23.2.3.1. Selecting a Delay Measurement Mechanism There are two different delay measurement mechanisms and they can be selected by means of an option added to the ptp4l command as follows: -P The -P selects the peer-to-peer ( P2P ) delay measurement mechanism. The P2P mechanism is preferred as it reacts to changes in the network topology faster, and may be more accurate in measuring the delay, than other mechanisms. The P2P mechanism can only be used in topologies where each port exchanges PTP messages with at most one other P2P port. It must be supported and used by all hardware, including transparent clocks, on the communication path. -E The -E selects the end-to-end ( E2E ) delay measurement mechanism. This is the default. The E2E mechanism is also referred to as the delay " request-response " mechanism. -A The -A enables automatic selection of the delay measurement mechanism. The automatic option starts ptp4l in E2E mode. It will change to P2P mode if a peer delay request is received. Note All clocks on a single PTP communication path must use the same mechanism to measure the delay. A warning will be printed when a peer delay request is received on a port using the E2E mechanism. A warning will be printed when a E2E delay request is received on a port using the P2P mechanism. | [
"~]# ptp4l -i eth3 -m",
"~]# ptp4l -i eth3 -m selected eth3 as PTP clock port 1: INITIALIZING to LISTENING on INITIALIZE port 0: INITIALIZING to LISTENING on INITIALIZE port 1: new foreign master 00a069.fffe.0b552d-1 selected best master clock 00a069.fffe.0b552d port 1: LISTENING to UNCALIBRATED on RS_SLAVE master offset -23947 s0 freq +0 path delay 11350 master offset -28867 s0 freq +0 path delay 11236 master offset -32801 s0 freq +0 path delay 10841 master offset -37203 s1 freq +0 path delay 10583 master offset -7275 s2 freq -30575 path delay 10583 port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED master offset -4552 s2 freq -30035 path delay 10385",
"~]# service ptp4l start",
"~]# ptp4l -i eth3 -m -S"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-starting_ptp4l |
Chapter 7. Data Grid ports and protocols | Chapter 7. Data Grid ports and protocols As Data Grid distributes data across your network and can establish connections for external client requests, you should be aware of the ports and protocols that Data Grid uses to handle network traffic. If run Data Grid as a remote server then you might need to allow remote clients through your firewall. Likewise, you should adjust ports that Data Grid nodes use for cluster communication to prevent conflicts or network issues. 7.1. Data Grid Server ports and protocols Data Grid Server provides network endpoints that allow client access with different protocols. Port Protocol Description 11222 TCP Hot Rod and REST 11221 TCP Memcached (disabled by default) Single port Data Grid Server exposes multiple protocols through a single TCP port, 11222 . Handling multiple protocols with a single port simplifies configuration and reduces management complexity when deploying Data Grid clusters. Using a single port also enhances security by minimizing the attack surface on the network. Data Grid Server handles HTTP/1.1, HTTP/2, and Hot Rod protocol requests from clients via the single port in different ways. HTTP/1.1 upgrade headers Client requests can include the HTTP/1.1 upgrade header field to initiate HTTP/1.1 connections with Data Grid Server. Client applications can then send the Upgrade: protocol header field, where protocol is a server endpoint. Application-Layer Protocol Negotiation (ALPN)/Transport Layer Security (TLS) Client requests include Server Name Indication (SNI) mappings for Data Grid Server endpoints to negotiate protocols over a TLS connection. Automatic Hot Rod detection Client requests that include Hot Rod headers automatically route to Hot Rod endpoints. 7.1.1. Configuring network firewalls for Data Grid traffic Adjust firewall rules to allow traffic between Data Grid Server and client applications. Procedure On Red Hat Enterprise Linux (RHEL) workstations, for example, you can allow traffic to port 11222 with firewalld as follows: To configure firewall rules that apply across a network, you can use the nftables utility. Reference Using and configuring firewalld Getting started with nftables 7.2. TCP and UDP ports for cluster traffic Data Grid uses the following ports for cluster transport messages: Default Port Protocol Description 7800 TCP/UDP JGroups cluster bind port 46655 UDP JGroups multicast Cross-site replication Data Grid uses the following ports for the JGroups RELAY2 protocol: 7900 For Data Grid clusters running on OpenShift. 7800 If using UDP for traffic between nodes and TCP for traffic between clusters. 7801 If using TCP for traffic between nodes and TCP for traffic between clusters. | [
"firewall-cmd --add-port=11222/tcp --permanent success firewall-cmd --list-ports | grep 11222 11222/tcp"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_security_guide/ports_protocols |
5.148. libguestfs | 5.148. libguestfs 5.148.1. RHSA-2012:0774 - libguestfs security, bug fix and enhancement update Updated libguestfs packages that fix one security issue, multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The libguestfs package contains a library for accessing and modifying guest disk images. Note The libguestfs package has been upgraded to upstream version 1.16, which provides a number of bug fixes and enhancements over the version. (BZ# 719879 ) Security Fix CVE-2012-2690 It was found that editing files with virt-edit left said files in a world-readable state (and did not preserve the file owner or Security-Enhanced Linux context). If an administrator on the host used virt-edit to edit a file inside a guest, the file would be left with world-readable permissions. This could lead to unprivileged guest users accessing files they would otherwise be unable to. Bug Fixes BZ# 647174 When cloning, the virt-clone tool incorrectly adopted some of the properties of the original virtual machine image, for example, the udev rules for network interface: the clone was then created with a NIC identical to the NIC of the original virtual machine NIC. With this update, the virt-sysprep and virt-sparsify tools have been added to solve this problem. The virt-sysprep tool can erase the state from guests, and virt-sparsify can make guest images sparse. Users are advised to use virt-sysprep and virt-sparsify either as a replacement for or in conjunction with virt-clone . BZ# 789960 The libguestfs daemon terminated unexpectedly when it attempted to mount a non-existent disk. This happened because libguestfs returned an unexpected error to any program that accidentally tried to mount a non-existent disk and all further operations intended to handle such a situation failed. With this update, libguestfs returns an appropriate error message and remains stable in the scenario described. BZ# 790958 If two threads in one program called the guestfs_launch() function at the same time, an unexpected error could be returned. The respective code in the libguestfs library has been modified to be thread-safe in this scenario and the library can be used from multi-threaded programs with more than one libguestfs handle. BZ# 769359 After a block device was closed, the udev device manager triggered a process which re-opened the block device. Consequently, libguestfs operations occasionally failed as they rely on the disk being immediately free for the kernel to re-read the partition table. This commonly occurred with the virt-resize feature. With this update, the operations now wait for the udev action to finish and no longer fail in the scenario described. BZ# 809401 In Fedora 17, the /bin directory is a symbolic link, while it was a directory in releases. Due to this change, libguestfs could not inspect a guest with Fedora 17 and newer. With this update, the libguestfs inspection has been changed so that it now recognizes such guests as expected. BZ# 729076 Previously, libguestfs considered any disk that contained autoexec.bat or boot.ini or ntldr file in its root a candidate for a Windows root disk. If a guest had an HP recovery partition, libguestfs could not recognize the HP recovery partition and handled the system as being dual-boot. Consequently, some virt tools did not work as they do not support multi-boot guests. With this update, libguestfs investigates a potential Windows root disk properly and no longer recognizes the special HP recovery partition as a Windows root disk. BZ# 811673 If launching of certain appliances failed, libguestfs did not set the error string. As Python programs handling the bindings assumed that the error string was not NULL , the binding process terminated unexpectedly with a segmentation fault when the g.launch() function was called under some circumstances. With this update, the error string is now set properly on all failure paths in the described scenario and Python programs no longer terminate with a segmentation fault when calling the g.launch() function under these circumstances. BZ# 812092 The qemu emulator cannot open disk image files that contain the colon character ( : ). Previously, libguestfs resolved the link to the disk image before sending it to qemu. If the resolved link contained the colon character, qemu failed to run. Also, libguestfs sometimes failed to open a disk image file under these circumstances due to incorrect handling of special characters. With this update, libguestfs no longer resolves a link to a disk image before sending it to qemu and is able to handle any filenames, except for filenames that contain a colon character. Also, libguestfs now returns correct diagnostic messages when presented with a filename that contains a colon character. Enhancements BZ# 741183 The libguestfs application now provides the virt-alignment-scan tool and updated virt-resize , which can diagnose unaligned partitions on a guest, so that you can fix the problem and improve the partitions' performance. For more information, refer to the virt-alignment-scan(1) and virt-resize(1) manual pages. BZ# 760221 Previously, libguestfs operations could not handle paths to HP Smart Array (cciss) devices. When the virt-p2v tool converted a physical machine that uses Linux software RAID devices to run in a VM, the libguestfs inspection failed to handle the paths in the /etc/fstab file. With this update, support for such cciss paths has been added and the virt-p2v tool is now able to successfully convert these guests. BZ# 760223 When the virt-p2v tool converted a physical machine that uses Linux software RAID devices to run in a VM, the libguestfs inspection failed to handle the paths in the /etc/fstab file. With this update, support for such RAID paths has been added and the virt-p2v tool is now able to successfully convert these guests. Users of libguestfs should upgrade to these updated packages, which fix these issues and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libguestfs |
3.3. Additional Resources | 3.3. Additional Resources For more information about security updates, ways of applying them, the Red Hat Customer Portal, and related topics, see the resources listed below. Installed Documentation yum (8) - The manual page for the Yum package manager provides information about the way Yum can be used to install, update, and remove packages on your systems. rpmkeys (8) - The manual page for the rpmkeys utility describes the way this program can be used to verify the authenticity of downloaded packages. Online Documentation Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 documents the use of the Yum and rpm commands that are used to install, update, and remove packages on Red Hat Enterprise Linux 7 systems. Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 documents the configuration of the SELinux mandatory access control mechanism. Red Hat Customer Portal Red Hat Customer Portal, Security - The Security section of the Customer Portal contains links to the most important resources, including the Red Hat CVE database, and contacts for Red Hat Product Security. Red Hat Security Blog - Articles about latest security-related issues from Red Hat security professionals. See Also Chapter 2, Security Tips for Installation describes how to configure your system securely from the beginning to make it easier to implement additional security settings later. Section 4.9.2, "Creating GPG Keys" describes how to create a set of personal GPG keys to authenticate your communications. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-keeping_your_system_up-to-date-additional_resources |
Chapter 7. Uninstalling OpenShift Data Foundation | Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_vmware_vsphere/uninstalling_openshift_data_foundation |
Common object reference | Common object reference OpenShift Container Platform 4.15 Reference guide common API objects Red Hat OpenShift Documentation Team | [
"<quantity> ::= <signedNumber><suffix>",
"(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)",
"(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)",
"(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/common_object_reference/index |
Chapter 3. Configuring Networking | Chapter 3. Configuring Networking Each provisioning type requires some network configuration. Use this chapter to configure network services in your integrated Capsule on Satellite Server. New hosts must have access to your Capsule Server. Capsule Server can be either your integrated Capsule on Satellite Server or an external Capsule Server. You might want to provision hosts from an external Capsule Server when the hosts are on isolated networks and cannot connect to Satellite Server directly, or when the content is synchronized with Capsule Server. Provisioning using the external Capsule Server can save on network bandwidth. Configuring Capsule Server has two basic requirements: Configuring network services. This includes: Content delivery services Network services (DHCP, DNS, and TFTP) Puppet configuration Defining network resource data in Satellite Server to help configure network interfaces on new hosts. The following instructions have similar applications to configuring standalone Capsules managing a specific network. To configure Satellite to use external DHCP, DNS, and TFTP services, see Configuring External Services in Installing Satellite Server in a Connected Network Environment . 3.1. Facts and NIC filtering Facts describe aspects such as total memory, operating system version, or architecture as reported by the host. You can find facts in Monitor > Facts and search hosts through facts or use facts in templates. Satellite collects facts from multiple sources: subscription manager ansible puppet Satellite is an inventory system for hosts and network interfaces. For hypervisors or container hosts, adding thousands of interfaces per host and updating the inventory every few minutes is inadequate. For each individual NIC reported, Satellite creates a NIC entry and those entries are never removed from the database. Parsing all the facts and comparing all records in the database makes Satellite extremely slow and unusable. To optimize the performance of various actions, most importantly fact import, you can use the options available on the Facts tab under Administer > Settings . 3.2. Optimizing performance by removing NICs from database Filter and exclude the connections using the Exclude pattern for facts stored in Satellite and Ignore interfaces with matching identifier option. By default, these options are set to most common hypervisors. If you name the virtual interfaces differently, you can update this filter to use it according to your requirements. Procedure In the Satellite web UI, navigate to Administer > Settings and select the Facts tab. To filter out all interfaces starting with specific names, for example, blu , add blu* to the Ignore interfaces with matching identifier option. To prevent databases from storing facts related to interfaces starting with specific names, for example, blu , add blu* to the Exclude pattern for facts stored in Satellite option. By default, it contains the same list as the Ignore interfaces with matching identifier option. You can override it based on the your requirements. This filters out facts completely without storing them. To remove facts from the database, enter the following command: This command removes all facts matching with the filter added in Administer > Settings > Facts > the Exclude pattern for facts stored in Satellite option. To remove interfaces from the database, enter the following command: This command removes all interfaces matching with the filter added in Administer > Settings > Facts > the Ignore interfaces with matching identifier option. 3.3. Network Resources Satellite contains networking resources that you must set up and configure to create a host. It includes the following networking resources: Domain You must assign every host that is managed by Satellite to a domain. Using the domain, Satellite can manage A, AAAA, and PTR records. Even if you do not want Satellite to manage your DNS servers, you still must create and associate at least one domain. Domains are included in the naming conventions Satellite hosts, for example, a host with the name test123 in the example.com domain has the fully qualified domain name test123.example.com . Subnet You must assign every host managed by Satellite to a subnet. Using subnets, Satellite can then manage IPv4 reservations. If there are no reservation integrations, you still must create and associate at least one subnet. When you manage a subnet in Satellite, you cannot create DHCP records for that subnet outside of Satellite. In Satellite, you can use IP Address Management (IPAM) to manage IP addresses with one of the following options: DHCP : DHCP Capsule manages the assignment of IP addresses by finding the available IP address starting from the first address of the range and skipping all addresses that are reserved. Before assigning an IP address, Capsule sends an ICMP and TCP pings to check whether the IP address is in use. Note that if a host is powered off, or has a firewall configured to disable connections, Satellite makes a false assumption that the IP address is available. This check does not work for hosts that are turned off, therefore, the DHCP option can only be used with subnets that Satellite controls and that do not have any hosts created externally. The Capsule DHCP module retains the offered IP addresses for a short period of time to prevent collisions during concurrent access, so some IP addresses in the IP range might remain temporarily unused. Internal DB : Satellite finds the available IP address from the Subnet range by excluding all IP addresses from the Satellite database in sequence. The primary source of data is the database, not DHCP reservations. This IPAM is not safe when multiple hosts are being created in parallel; in that case, use DHCP or Random DB IPAM instead. Random DB : Satellite finds the available IP address from the Subnet range by excluding all IP addresses from the Satellite database randomly. The primary source of data is the database, not DHCP reservations. This IPAM is safe to use with concurrent host creation as IP addresses are returned in random order, minimizing the chance of a conflict. EUI-64 : Extended Unique Identifier (EUI) 64bit IPv6 address generation, as per RFC2373, is obtained through the 48-bit MAC address. External IPAM : Delegates IPAM to an external system through Capsule feature. Satellite currently does not ship with any external IPAM implementations, but several plug-ins are in development. None : IP address for each host must be entered manually. Options DHCP, Internal DB and Random DB can lead to DHCP conflicts on subnets with records created externally. These subnets must be under exclusive Satellite control. For more information about adding a subnet, see Section 3.9, "Adding a Subnet to Satellite Server" . DHCP Ranges You can define the same DHCP range in Satellite Server for both discovered and provisioned systems, but use a separate range for each service within the same subnet. 3.4. Satellite and DHCP Options Satellite manages DHCP reservations through a DHCP Capsule. Satellite also sets the -server and filename DHCP options. The -server option The -server option provides the IP address of the TFTP server to boot from. This option is not set by default and must be set for each TFTP Capsule. You can use the satellite-installer command with the --foreman-proxy-tftp-servername option to set the TFTP server in the /etc/foreman-proxy/settings.d/tftp.yml file: Each TFTP Capsule then reports this setting through the API and Satellite can retrieve the configuration information when it creates the DHCP record. When the PXE loader is set to none , Satellite does not populate the -server option into the DHCP record. If the -server option remains undefined, Satellite uses reverse DNS search to find a TFTP server address to assign, but you might encounter the following problems: DNS timeouts during provisioning Querying of incorrect DNS server. For example, authoritative rather than caching Errors about incorrect IP address for the TFTP server. For example, PTR record was invalid If you encounter these problems, check the DNS setup on both Satellite and Capsule, specifically the PTR record resolution. The filename option The filename option contains the full path to the file that downloads and executes during provisioning. The PXE loader that you select for the host or host group defines which filename option to use. When the PXE loader is set to none , Satellite does not populate the filename option into the DHCP record. Depending on the PXE loader option, the filename changes as follows: PXE loader option filename entry Notes PXELinux BIOS pxelinux.0 PXELinux UEFI pxelinux.efi iPXE Chain BIOS undionly.kpxe PXEGrub2 UEFI grub2/grubx64.efi x64 can differ depending on architecture iPXE UEFI HTTP \http:// capsule.example.com :8000/httpboot/ipxe-x64.efi Requires the httpboot feature and renders the filename as a full URL where capsule.example.com is a known host name of Capsule in Satellite. Grub2 UEFI HTTP \http:// capsule.example.com :8000/httpboot/grub2/grubx64.efi Requires the httpboot feature and renders the filename as a full URL where capsule.example.com is a known host name of Capsule in Satellite. 3.5. Troubleshooting DHCP Problems in Satellite Satellite can manage an ISC DHCP server on internal or external DHCP Capsule. Satellite can list, create, and delete DHCP reservations and leases. However, there are a number of problems that you might encounter on occasions. Out of sync DHCP records When an error occurs during DHCP orchestration, DHCP records in the Satellite database and the DHCP server might not match. To fix this, you must add missing DHCP records from the Satellite database to the DHCP server and then remove unwanted records from the DHCP server as per the following steps: Procedure To preview the DHCP records that are going to be added to the DHCP server, enter the following command: If you are satisfied by the preview changes in the step, apply them by entering the above command with the perform=1 argument: To keep DHCP records in Satellite and in the DHCP server synchronized, you can remove unwanted DHCP records from the DHCP server. Note that Satellite assumes that all managed DHCP servers do not contain third-party records, therefore, this step might delete those unexpected records. To preview what records are going to be removed from the DHCP server, enter the following command: If you are satisfied by the preview changes in the step, apply them by entering the above command with the perform=1 argument: PXE loader option change When the PXE loader option is changed for an existing host, this causes a DHCP conflict. The only workaround is to overwrite the DHCP entry. Incorrect permissions on DHCP files An operating system update can update the dhcpd package. This causes the permissions of important directories and files to reset so that the DHCP Capsule cannot read the required information. For more information, see DHCP error while provisioning host from Satellite server Error ERF12-6899 ProxyAPI::ProxyException: Unable to set DHCP entry RestClient::ResourceNotFound 404 Resource Not Found on Red Hat Knowledgebase. Changing the DHCP Capsule entry Satellite manages DHCP records only for hosts that are assigned to subnets with a DHCP Capsule set. If you create a host and then clear or change the DHCP Capsule, when you attempt to delete the host, the action fails. If you create a host without setting the DHCP Capsule and then try to set the DHCP Capsule, this causes DHCP conflicts. Deleted hosts entries in the dhcpd.leases file Any changes to a DHCP lease are appended to the end of the dhcpd.leases file. Because entries are appended to the file, it is possible that two or more entries of the same lease can exist in the dhcpd.leases file at the same time. When there are two or more entries of the same lease, the last entry in the file takes precedence. Group, subgroup and host declarations in the lease file are processed in the same manner. If a lease is deleted, { deleted; } is appended to the declaration. 3.6. Prerequisites for Image Based Provisioning Post-Boot Configuration Method Images that use the finish post-boot configuration scripts require a managed DHCP server, such as Satellite's integrated Capsule or an external Capsule. The host must be created with a subnet associated with a DHCP Capsule, and the IP address of the host must be a valid IP address from the DHCP range. It is possible to use an external DHCP service, but IP addresses must be entered manually. The SSH credentials corresponding to the configuration in the image must be configured in Satellite to enable the post-boot configuration to be made. Check the following items when troubleshooting a virtual machine booted from an image that depends on post-configuration scripts: The host has a subnet assigned in Satellite Server. The subnet has a DHCP Capsule assigned in Satellite Server. The host has a valid IP address assigned in Satellite Server. The IP address acquired by the virtual machine using DHCP matches the address configured in Satellite Server. The virtual machine created from an image responds to SSH requests. The virtual machine created from an image authorizes the user and password, over SSH, which is associated with the image being deployed. Satellite Server has access to the virtual machine via SSH keys. This is required for the virtual machine to receive post-configuration scripts from Satellite Server. Pre-Boot Initialization Configuration Method Images that use the cloud-init scripts require a DHCP server to avoid having to include the IP address in the image. A managed DHCP Capsule is preferred. The image must have the cloud-init service configured to start when the system boots and fetch a script or configuration data to use in completing the configuration. Check the following items when troubleshooting a virtual machine booted from an image that depends on initialization scripts included in the image: There is a DHCP server on the subnet. The virtual machine has the cloud-init service installed and enabled. For information about the differing levels of support for finish and cloud-init scripts in virtual-machine images, see the Red Hat Knowledgebase Solution What are the supported compute resources for the finish and cloud-init scripts on the Red Hat Customer Portal. 3.7. Configuring Network Services Some provisioning methods use Capsule Server services. For example, a network might require Capsule Server to act as a DHCP server. A network can also use PXE boot services to install the operating system on new hosts. This requires configuring Capsule Server to use the main PXE boot services: DHCP, DNS, and TFTP. Use the satellite-installer command with the options to configure these services on Satellite Server. To configure these services on an external Capsule Server, run satellite-installer --scenario capsule . Satellite Server uses eth0 for external communication, such as connecting to Red Hat's CDN. Procedure Enter the satellite-installer command to configure the required network services: Find Capsule Server that you configure: Refresh features of Capsule Server to view the changes: Verify the services configured on Capsule Server: 3.7.1. Multiple Subnets or Domains Using Installer The satellite-installer options allow only for a single DHCP subnet or DNS domain. One way to define more than one subnet is by using a custom configuration file. For every additional subnet or domain, create an entry in /etc/foreman-installer/custom-hiera.yaml file: Execute satellite-installer to perform the changes and verify that the /etc/dhcp/dhcpd.conf contains appropriate entries. Subnets must be then defined in Satellite database. 3.7.2. DHCP Options for Network Configuration --foreman-proxy-dhcp Enables the DHCP service. You can set this option to true or false . --foreman-proxy-dhcp-managed Enables Foreman to manage the DHCP service. You can set this option to true or false . --foreman-proxy-dhcp-gateway The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network. --foreman-proxy-dhcp-interface Sets the interface for the DHCP service to listen for requests. Set this to eth1 . --foreman-proxy-dhcp-nameservers Sets the addresses of the nameservers provided to clients through DHCP. Set this to the address for Satellite Server on eth1 . --foreman-proxy-dhcp-range A space-separated DHCP pool range for Discovered and Unmanaged services. --foreman-proxy-dhcp-server Sets the address of the DHCP server to manage. Run satellite-installer --help to view more options related to DHCP and other Capsule services. 3.7.3. DNS Options for Network Configuration --foreman-proxy-dns Enables the DNS feature. You can set this option to true or false . --foreman-proxy-dns-provider Selects the provider to be used. --foreman-proxy-dns-managed Let the installer manage ISC BIND. This is only relevant when using the nsupdate and nsupdate_gss providers. You can set this option to true or false . --foreman-proxy-dns-forwarders Sets the DNS forwarders. Only used when ISC BIND is managed by the installer. Set this to your DNS recursors. --foreman-proxy-dns-interface Sets the interface to listen for DNS requests. Only used when ISC BIND is managed by the installer. Set this to eth1 . --foreman-proxy-dns-reverse The DNS reverse zone name. Only used when ISC BIND is managed by the installer. --foreman-proxy-dns-server Sets the address of the DNS server. Only used by the nsupdate , nsupdate_gss , and infoblox providers. --foreman-proxy-dns-zone Sets the DNS zone name. Only used when ISC BIND is managed by the installer. Run satellite-installer --help to view more options related to DNS and other Capsule services. 3.7.4. TFTP Options for Network Configuration --foreman-proxy-tftp Enables TFTP service. You can set this option to true or false . --foreman-proxy-tftp-managed Enables Foreman to manage the TFTP service. You can set this option to true or false . --foreman-proxy-tftp-servername Sets the TFTP server to use. Ensure that you use Capsule's IP address. Run satellite-installer --help to view more options related to TFTP and other Capsule services. 3.7.5. Using TFTP Services Through NAT You can use Satellite TFTP services through NAT. To do this, on all NAT routers or firewalls, you must enable a TFTP service on UDP port 69 and enable the TFTP state tracking feature. For more information, see the documentation for your NAT device. Using NAT on Red Hat Enterprise Linux 7: Use the following command to allow TFTP service on UDP port 69, load the kernel TFTP state tracking module, and make the changes persistent: Using NAT on Red Hat Enterprise Linux 6: Configure the firewall to allow TFTP service UDP on port 69: Load the ip_conntrack_tftp kernel TFTP state module. In the /etc/sysconfig/iptables-config file, locate IPTABLES_MODULES and add ip_conntrack_tftp as follows: 3.8. Adding a Domain to Satellite Server Satellite Server defines domain names for each host on the network. Satellite Server must have information about the domain and Capsule Server responsible for domain name assignment. Checking for Existing Domains Satellite Server might already have the relevant domain created as part of Satellite Server installation. Switch the context to Any Organization and Any Location then check the domain list to see if it exists. DNS Server Configuration Considerations During the DNS record creation, Satellite performs conflict DNS lookups to verify that the host name is not in active use. This check runs against one of the following DNS servers: The system-wide resolver if Administer > Settings > Query local nameservers is set to true . The nameservers that are defined in the subnet associated with the host. The authoritative NS-Records that are queried from the SOA from the domain name associated with the host. If you experience timeouts during DNS conflict resolution, check the following settings: The subnet nameservers must be reachable from Satellite Server. The domain name must have a Start of Authority (SOA) record available from Satellite Server. The system resolver in the /etc/resolv.conf file must have a valid and working configuration. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Domains and click Create Domain . In the DNS Domain field, enter the full DNS domain name. In the Fullname field, enter the plain text name of the domain. Click the Parameters tab and configure any domain level parameters to apply to hosts attached to this domain. For example, user defined Boolean or string parameters to use in templates. Click Add Parameter and fill in the Name and Value fields. Click the Locations tab, and add the location where the domain resides. Click the Organizations tab, and add the organization that the domain belongs to. Click Submit to save the changes. CLI procedure Use the hammer domain create command to create a domain: In this example, the --dns-id option uses 1 , which is the ID of your integrated Capsule on Satellite Server. 3.9. Adding a Subnet to Satellite Server You must add information for each of your subnets to Satellite Server because Satellite configures interfaces for new hosts. To configure interfaces, Satellite Server must have all the information about the network that connects these interfaces. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Subnets , and in the Subnets window, click Create Subnet . In the Name field, enter a name for the subnet. In the Description field, enter a description for the subnet. In the Network address field, enter the network address for the subnet. In the Network prefix field, enter the network prefix for the subnet. In the Network mask field, enter the network mask for the subnet. In the Gateway address field, enter the external gateway for the subnet. In the Primary DNS server field, enter a primary DNS for the subnet. In the Secondary DNS server , enter a secondary DNS for the subnet. From the IPAM list, select the method that you want to use for IP address management (IPAM). For more information about IPAM, see Chapter 3, Configuring Networking . Enter the information for the IPAM method that you select. Click the Remote Execution tab and select the Capsule that controls the remote execution. Click the Domains tab and select the domains that apply to this subnet. Click the Capsules tab and select the Capsule that applies to each service in the subnet, including DHCP, TFTP, and reverse DNS services. Click the Parameters tab and configure any subnet level parameters to apply to hosts attached to this subnet. For example, user defined Boolean or string parameters to use in templates. Click the Locations tab and select the locations that use this Capsule. Click the Organizations tab and select the organizations that use this Capsule. Click Submit to save the subnet information. CLI procedure Create the subnet with the following command: Note In this example, the --dhcp-id , --dns-id , and --tftp-id options use 1, which is the ID of the integrated Capsule in Satellite Server. | [
"foreman-rake facts:clean",
"foreman-rake interfaces:clean",
"satellite-installer --foreman-proxy-tftp-servername 1.2.3.4",
"foreman-rake orchestration:dhcp:add_missing subnet_name=NAME",
"foreman-rake orchestration:dhcp:add_missing subnet_name=NAME perform=1",
"foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME",
"foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME perform=1",
"satellite-installer --foreman-proxy-dhcp true --foreman-proxy-dhcp-gateway \" 192.168.140.1 \" --foreman-proxy-dhcp-interface \"eth1\" --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-nameservers \" 192.168.140.2 \" --foreman-proxy-dhcp-range \" 192.168.140.10 192.168.140.110 \" --foreman-proxy-dhcp-server \" 192.168.140.2 \" --foreman-proxy-dns true --foreman-proxy-dns-forwarders \" 8.8.8.8 ; 8.8.4.4 \" --foreman-proxy-dns-interface \"eth1\" --foreman-proxy-dns-managed true --foreman-proxy-dns-reverse \" 140.168.192.in-addr.arpa \" --foreman-proxy-dns-server \" 127.0.0.1 \" --foreman-proxy-dns-zone \" example.com \" --foreman-proxy-tftp true --foreman-proxy-tftp-managed true",
"hammer proxy list",
"hammer proxy refresh-features --name \" satellite.example.com \"",
"hammer proxy info --name \" satellite.example.com \"",
"dhcp::pools: isolated.lan: network: 192.168.99.0 mask: 255.255.255.0 gateway: 192.168.99.1 range: 192.168.99.5 192.168.99.49 dns::zones: # creates @ SOA USD::fqdn root.example.com. # creates USD::fqdn A USD::ipaddress example.com: {} # creates @ SOA test.example.net. hostmaster.example.com. # creates test.example.net A 192.0.2.100 example.net: soa: test.example.net soaip: 192.0.2.100 contact: hostmaster.example.com. # creates @ SOA USD::fqdn root.example.org. # does NOT create an A record example.org: reverse: true # creates @ SOA USD::fqdn hostmaster.example.com. 2.0.192.in-addr.arpa: reverse: true contact: hostmaster.example.com.",
"firewall-cmd --add-service=tftp && firewall-cmd --runtime-to-permanent",
"iptables --sport 69 --state ESTABLISHED -A OUTPUT -i eth0 -j ACCEPT -m state -p udp service iptables save",
"IPTABLES_MODULES=\"ip_conntrack_tftp\"",
"hammer domain create --description \" My_Domain \" --dns-id My_DNS_ID --locations \" My_Location \" --name \" my-domain.tld \" --organizations \" My_Organization \"",
"hammer subnet create --boot-mode \"DHCP\" --description \" My_Description \" --dhcp-id My_DHCP_ID --dns-id My_DNS_ID --dns-primary \"192.168.140.2\" --dns-secondary \"8.8.8.8\" --domains \" my-domain.tld\" \\ --from \"192.168.140.111\" \\ --gateway \"192.168.140.1\" \\ --ipam \"DHCP\" \\ --locations \"_My_Location \" --mask \"255.255.255.0\" --name \" My_Network \" --network \"192.168.140.0\" --organizations \" My_Organization \" --tftp-id My_TFTP_ID --to \"192.168.140.250\" \\"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/Configuring_Networking_provisioning |
Chapter 52. Creating guided rules | Chapter 52. Creating guided rules Guided rules enable you to define business rules in a structured format, based on the data objects associated with the rules. You can create and define guided rules individually for your project. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter an informative Guided Rule name and select the appropriate Package . The package that you specify must be the same package where the required data objects have been assigned or will be assigned. You can also select Show declared DSL sentences if any domain specific language (DSL) assets have been defined in your project. These DSL assets will then become usable objects for conditions and actions that you define in the guided rules designer. Click Ok to create the rule asset. The new guided rule is now listed in the Guided Rules panel of the Project Explorer , or in the Guided Rules (with DSL) panel if you selected the Show declared DSL sentences option. Click the Data Objects tab and confirm that all data objects required for your rules are listed. If not, click New item to import data objects from other packages, or create data objects within your package. After all data objects are in place, return to the Model tab of the guided rules designer and use the buttons on the right side of the window to add and define the WHEN (condition) and THEN (action) sections of the rule, based on the available data objects. Figure 52.1. The guided rules designer The WHEN part of the rule contains the conditions that must be met to execute an action. For example, if a bank requires loan applicants to have over 21 years of age, then the WHEN condition of an Underage rule would be Age | less than | 21 . The THEN part of the rule contains the actions to be performed when the conditional part of the rule has been met. For example, when the loan applicant is under 21 years old, the THEN action would set approved to false , declining the loan because the applicant is under age. You can also specify exceptions for more complex rules, such as if a bank may approve of an under-aged applicant when a guarantor is involved. In that case, you would create or import a guarantor data object and then add the field to the guided rule. After you define all components of the rule, click Validate in the upper-right toolbar of the guided rules designer to validate the guided rule. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save in the guided rules designer to save your work. 52.1. Adding WHEN conditions in guided rules The WHEN part of the rule contains the conditions that must be met to execute an action. For example, if a bank requires loan applicants to have over 21 years of age, then the WHEN condition of an Underage rule would be Age | less than | 21 . You can set simple or complex conditions to determine how and when your rules are applied. Prerequisites All data objects required for your rules have been created or imported and are listed in the Data Objects tab of the guided rules designer. Procedure In the guided rules designer, click the plus icon ( ) on the right side of the WHEN section. The Add a condition to the rule window with the available condition elements opens. Figure 52.2. Add a condition to the rule The list includes the data objects from the Data Objects tab of the guided rules designer, any DSL objects defined for the package (if you selected Show declared DSL sentences when you created this guided rule), and the following standard options: The following does not exist: Use this to specify facts and constraints that must not exist. The following exists: Use this to specify facts and constraints that must exist. This option is triggered on only the first match, not subsequent matches. Any of the following are true: Use this to list any facts or constraints that must be true. From: Use this to define a From conditional element for the rule. From Accumulate: Use this to define an Accumulate conditional element for the rule. From Collect: Use this to define a Collect conditional element for the rule. From Entry Point: Use this to define an Entry Point for the pattern. Free form DRL: Use this to insert a free-form DRL field where you can define condition elements freely, without the guided rules designer. Choose a condition element (for example, LoanApplication ) and click Ok . Click the condition element in the guided rules designer and use the Modify constraints for LoanApplication window to add a restriction on a field, apply multiple field constraints, add a new formula style expression, apply an expression editor, or set a variable name. Figure 52.3. Modify a condition Note A variable name enables you to identify a fact or field in other constructs within the guided rule. For example, you could set the variable of LoanApplication to a and then reference a in a separate Bankruptcy constraint that specifies which application the bankruptcy is based on. After you select a constraint, the window closes automatically. Choose an operator for the restriction (for example, greater than ) from the drop-down menu to the added restriction. Click the edit icon ( ) to define the field value. The field value can be a literal value, a formula, or a full MVEL expression. To apply multiple field constraints, click the condition and in the Modify constraints for LoanApplication window, select All of(And) or Any of(Or) from the Multiple field constraint drop-down menu. Figure 52.4. Add multiple field constraints Click the constraint in the guided rules designer and further define the field value. After you define all condition components of the rule, click Validate in the upper-right toolbar of the guided rules designer to validate the guided rule conditions. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save in the guided rules designer to save your work. 52.2. Adding THEN actions in guided rules The THEN part of the rule contains the actions to be performed when the WHEN condition of the rule has been met. For example, when a loan applicant is under 21 years old, the THEN action might set approved to false , declining the loan because the applicant is under age. You can set simple or complex actions to determine how and when your rules are applied. Prerequisites All data objects required for your rules have been created or imported and are listed in the Data Objects tab of the guided rules designer. Procedure In the guided rules designer, click the plus icon ( ) on the right side of the THEN section. The Add a new action window with the available action elements opens. Figure 52.5. Add a new action to the rule The list includes insertion and modification options based on the data objects in the Data Objects tab of the guided rules designer, and on any DSL objects defined for the package (if you selected Show declared DSL sentences when you created this guided rule): Change field values of: Use this to set the value of fields on a fact (such as LoanApplication ) without notifying the decision engine of the change. Delete: Use this to delete a fact. Modify: Use this to specify fields to be modified for a fact and to notify the decision engine of the change. Insert fact: Use this to insert a fact and define resulting fields and values for the fact. Logically Insert fact: Use this to insert a fact logically into the decision engine and define resulting fields and values for the fact. The decision engine is responsible for logical decisions on insertions and retractions of facts. After regular or stated insertions, facts have to be retracted explicitly. After logical insertions, facts are automatically retracted when the conditions that originally asserted the facts are no longer true. Add free form DRL: Use this to insert a free-form DRL field where you can define condition elements freely, without the guided rules designer. Call method on: Use this to invoke a method from another fact. Choose an action element (for example, Modify ) and click Ok . Click the action element in the guided rules designer and use the Add a field window to select a field. Figure 52.6. Add a field After you select a field, the window closes automatically. Click the edit icon ( ) to define the field value. The field value can be a literal value or a formula. After you define all action components of the rule, click Validate in the upper-right toolbar of the guided rules designer to validate the guided rule actions. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save in the guided rules designer to save your work. 52.3. Defining enumerations for drop-down lists in rule assets Enumeration definitions in Business Central determine the possible values of fields for conditions or actions in guided rules, guided rule templates, and guided decision tables. An enumeration definition contains a fact.field mapping to a list of supported values that are displayed as a drop-down list in the relevant field of a rule asset. When a user selects a field that is based on the same fact and field as the enumeration definition, the drop-down list of defined values is displayed. You can define enumerations in Business Central or in the DRL source for your Red Hat Process Automation Manager project. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Enumeration . Enter an informative Enumeration name and select the appropriate Package . The package that you specify must be the same package where the required data objects and relevant rule assets have been assigned or will be assigned. Click Ok to create the enumeration. The new enumeration is now listed in the Enumeration Definitions panel of the Project Explorer . In the Model tab of the enumerations designer, click Add enum and define the following values for the enumeration: Fact : Specify an existing data object within the same package of your project with which you want to associate this enumeration. Open the Data Objects panel in the Project Explorer to view the available data objects, or create the relevant data object as a new asset if needed. Field : Specify an existing field identifier that you defined as part of the data object that you selected for the Fact . Open the Data Objects panel in the Project Explorer to select the relevant data object and view the list of available Identifier options. You can create the relevant identifier for the data object if needed. Context : Specify a list of values in the format ['string1','string2','string3'] or [integer1,integer2,integer3] that you want to map to the Fact and Field definitions. These values will be displayed as a drop-down list for the relevant field of the rule asset. For example, the following enumeration defines the drop-down values for applicant credit rating in a loan application decision service: Figure 52.7. Example enumeration for applicant credit rating in Business Central Example enumeration for applicant credit rating in the DRL source In this example, for any guided rule, guided rule template, or guided decision table that is in the same package of the project and that uses the Applicant data object and the creditRating field, the configured values are available as drop-down options: Figure 52.8. Example enumeration drop-down options in a guided rule or guided rule template Figure 52.9. Example enumeration drop-down options in a guided decision table 52.3.1. Advanced enumeration options for rule assets For advanced use cases with enumeration definitions in your Red Hat Process Automation Manager project, consider the following extended options for defining enumerations: Mapping between DRL values and values in Business Central If you want the enumeration values to appear differently or more completely in the Business Central interface than they appear in the DRL source, use a mapping in the format 'fact.field' : ['sourceValue1=UIValue1','sourceValue2=UIValue2', ... ] for your enumeration definition values. For example, in the following enumeration definition for loan status, the options A or D are used in the DRL file but the options Approved or Declined are displayed in Business Central: Enumeration value dependencies If you want the selected value in one drop-down list to determine the available options in a subsequent drop-down list, use the format 'fact.fieldB[fieldA=value1]' : ['value2', 'value3', ... ] for your enumeration definition. For example, in the following enumeration definition for insurance policies, the policyType field accepts the values Home or Car . The type of policy that the user selects determines the policy coverage field options that are then available: Note Enumeration dependencies are not applied across rule conditions and actions. For example, in this insurance policy use case, the selected policy in the rule condition does not determine the available coverage options in the rule actions, if applicable. External data sources in enumerations If you want to retrieve a list of enumeration values from an external data source instead of defining the values directly in the enumeration definition, on the class path of your project, add a helper class that returns a java.util.List list of strings. In the enumeration definition, instead of specifying a list of values, identify the helper class that you configured to retrieve the values externally. For example, in the following enumeration definition for loan applicant region, instead of defining applicant regions explicitly in the format 'Applicant.region' : ['country1', 'country2', ... ] , the enumeration uses a helper class that returns the list of values defined externally: In this example, a DataHelper class contains a getListOfRegions() method that returns a list of strings. The enumerations are loaded in the drop-down list for the relevant field in the rule asset. You can also load dependent enumeration definitions dynamically from a helper class by identifying the dependent field as usual and enclosing the call to the helper class within quotation marks: If you want to load all enumeration data entirely from an external data source, such as a relational database, you can implement a Java class that returns a Map<String, List<String>> map. The key of the map is the fact.field mapping and the value is a java.util.List<String> list of values. For example, the following Java class defines loan applicant regions for the related enumeration: public class SampleDataSource { public Map<String, List<String>> loadData() { Map data = new HashMap(); List d = new ArrayList(); d.add("AU"); d.add("DE"); d.add("ES"); d.add("UK"); d.add("US"); ... data.put("Applicant.region", d); return data; } } The following enumeration definition correlates to this example Java class. The enumeration contains no references to fact or field names because they are defined in the Java class: The = operator enables Business Central to load all enumeration data from the helper class. The helper methods are statically evaluated when the enumeration definition is requested for use in an editor. Note Defining an enumeration without a fact and field definition is currently not supported in Business Central. To define the enumeration for the associated Java class in this way, use the DRL source in your Red Hat Process Automation Manager project. 52.4. Adding other rule options You can also use the rule designer to add metadata within a rule, define additional rule attributes (such as salience and no-loop ), and freeze areas of the rule to restrict modifications to conditions or actions. Procedure In the rule designer, click (show options... ) under the THEN section. Click the plus icon ( ) on the right side of the window to add options. Select an option to be added to the rule: Metadata: Enter a metadata label and click the plus icon ( ). Then enter any needed data in the field provided in the rule designer. Attribute: Select from the list of rule attributes. Then further define the value in the field or option displayed in the rule designer. Freeze areas for editing: Select Conditions or Actions to restrict the area from being modified in the rule designer. Figure 52.10. Rule options Click Save in the rule designer to save your work. 52.4.1. Rule attributes Rule attributes are additional specifications that you can add to business rules to modify rule behavior. The following table lists the names and supported values of the attributes that you can assign to rules: Table 52.1. Rule attributes Attribute Value salience An integer defining the priority of the rule. Rules with a higher salience value are given higher priority when ordered in the activation queue. Example: salience 10 enabled A Boolean value. When the option is selected, the rule is enabled. When the option is not selected, the rule is disabled. Example: enabled true date-effective A string containing a date and time definition. The rule can be activated only if the current date and time is after a date-effective attribute. Example: date-effective "4-Sep-2018" date-expires A string containing a date and time definition. The rule cannot be activated if the current date and time is after the date-expires attribute. Example: date-expires "4-Oct-2018" no-loop A Boolean value. When the option is selected, the rule cannot be reactivated (looped) if a consequence of the rule re-triggers a previously met condition. When the condition is not selected, the rule can be looped in these circumstances. Example: no-loop true agenda-group A string identifying an agenda group to which you want to assign the rule. Agenda groups allow you to partition the agenda to provide more execution control over groups of rules. Only rules in an agenda group that has acquired a focus are able to be activated. Example: agenda-group "GroupName" activation-group A string identifying an activation (or XOR) group to which you want to assign the rule. In activation groups, only one rule can be activated. The first rule to fire will cancel all pending activations of all rules in the activation group. Example: activation-group "GroupName" duration A long integer value defining the duration of time in milliseconds after which the rule can be activated, if the rule conditions are still met. Example: duration 10000 timer A string identifying either int (interval) or cron timer definitions for scheduling the rule. Example: timer ( cron:* 0/15 * * * ? ) (every 15 minutes) calendar A Quartz calendar definition for scheduling the rule. Example: calendars "* * 0-7,18-23 ? * *" (exclude non-business hours) auto-focus A Boolean value, applicable only to rules within agenda groups. When the option is selected, the time the rule is activated, a focus is automatically given to the agenda group to which the rule is assigned. Example: auto-focus true lock-on-active A Boolean value, applicable only to rules within rule flow groups or agenda groups. When the option is selected, the time the ruleflow group for the rule becomes active or the agenda group for the rule receives a focus, the rule cannot be activated again until the ruleflow group is no longer active or the agenda group loses the focus. This is a stronger version of the no-loop attribute, because the activation of a matching rule is discarded regardless of the origin of the update (not only by the rule itself). This attribute is ideal for calculation rules where you have a number of rules that modify a fact and you do not want any rule re-matching and firing again. Example: lock-on-active true ruleflow-group A string identifying a rule flow group. In rule flow groups, rules can fire only when the group is activated by the associated rule flow. Example: ruleflow-group "GroupName" dialect A string identifying either JAVA or MVEL as the language to be used for code expressions in the rule. By default, the rule uses the dialect specified at the package level. Any dialect specified here overrides the package dialect setting for the rule. Example: dialect "JAVA" Note When you use Red Hat Process Automation Manager without the executable model, the dialect "JAVA" rule consequences support only Java 5 syntax. For more information about executable models, see Packaging and deploying an Red Hat Process Automation Manager project . | [
"a : LoanApplication() Bankruptcy( application == a ).",
"'Applicant.creditRating' : ['AA', 'OK', 'Sub prime']",
"'Loan.status' : ['A=Approved','D=Declined']",
"'Insurance.policyType' : ['Home', 'Car'] 'Insurance.coverage[policyType=Home]' : ['property', 'liability'] 'Insurance.coverage[policyType=Car]' : ['collision', 'fullCoverage']",
"'Applicant.region' : (new com.mycompany.DataHelper()).getListOfRegions()",
"'Applicant.region[countryCode]' : '(new com.mycompany.DataHelper()).getListOfRegions(\"@{countryCode}\")'",
"public class SampleDataSource { public Map<String, List<String>> loadData() { Map data = new HashMap(); List d = new ArrayList(); d.add(\"AU\"); d.add(\"DE\"); d.add(\"ES\"); d.add(\"UK\"); d.add(\"US\"); data.put(\"Applicant.region\", d); return data; } }",
"=(new SampleDataSource()).loadData()"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/guided-rules-create-proc_guided-rules |
Chapter 6. Monitoring disaster recovery health | Chapter 6. Monitoring disaster recovery health 6.1. Enable monitoring for disaster recovery Use this procedure to enable basic monitoring for your disaster recovery setup. Procedure On the Hub cluster, open a terminal window Add the following label to openshift-operator namespace. Note You must always add this label for Regional-DR solution. 6.2. Enabling disaster recovery dashboard on Hub cluster This section guides you to enable the disaster recovery dashboard for advanced monitoring on the Hub cluster. For Regional-DR, the dashboard shows monitoring status cards for operator health, cluster health, metrics, alerts and application count. For Metro-DR, you can configure the dashboard to only monitor the ramen setup health and application count. Prerequisites Ensure that you have already installed the following OpenShift Container Platform version 4.16 and have administrator privileges. ODF Multicluster Orchestrator with the console plugin enabled. Red Hat Advanced Cluster Management for Kubernetes 2.11 (RHACM) from Operator Hub. For instructions on how to install, see Installing RHACM . Ensure you have enabled observability on RHACM. See Enabling observability guidelines . Procedure On the Hub cluster, open a terminal window and perform the steps. Create the configmap file named observability-metrics-custom-allowlist.yaml . You can use the following YAML to list the disaster recovery metrics on Hub cluster. For details, see Adding custom metrics . To know more about ramen metrics, see Disaster recovery metrics . In the open-cluster-management-observability namespace, run the following command: After observability-metrics-custom-allowlist yaml is created, RHACM starts collecting the listed OpenShift Data Foundation metrics from all the managed clusters. To exclude a specific managed cluster from collecting the observability data, add the following cluster label to the clusters: observability: disabled . 6.3. Viewing health status of disaster recovery replication relationships Prerequisites Ensure that you have enabled the disaster recovery dashboard for monitoring. For instructions, see chapter Enabling disaster recovery dashboard on Hub cluster . Procedure On the Hub cluster, ensure All Clusters option is selected. Refresh the console to make the DR monitoring dashboard tab accessible. Navigate to Data Services and click Data policies . On the Overview tab, you can view the health status of the operators, clusters and applications. Green tick indicates that the operators are running and available.. Click the Disaster recovery tab to view a list of DR policy details and connected applications. 6.4. Disaster recovery metrics These are the ramen metrics that are scrapped by prometheus. ramen_last_sync_timestamp_seconds ramen_policy_schedule_interval_seconds ramen_last_sync_duration_seconds ramen_last_sync_data_bytes ramen_workload_protection_status Run these metrics from the Hub cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM operator) is installed. 6.4.1. Last synchronization timestamp in seconds This is the time in seconds which gives the time of the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_timestamp_seconds Metrics type Gauge Labels ObjType : Type of the object, here its DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Policyname : Name of the DRPolicy SchedulingInterval : Scheduling interval value from DRPolicy Metric value Value is set as Unix seconds which is obtained from lastGroupSyncTime from DRPC status. 6.4.2. Policy schedule interval in seconds This gives the scheduling interval in seconds from DRPolicy. Metric name ramen_policy_schedule_interval_seconds Metrics type Gauge Labels Policyname : Name of the DRPolicy Metric value This is set to a scheduling interval in seconds which is taken from DRPolicy. 6.4.3. Last synchronization duration in seconds This represents the longest time taken to sync from the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_duration_seconds Metrics type Gauge Labels obj_type : Type of the object, here it is DRPC obj_name : Name of the object, here it is DRPC-Name obj_namespace : DRPC namespace scheduling_interval : Scheduling interval value from DRPolicy Metric value The value is taken from lastGroupSyncDuration from DRPC status. 6.4.4. Total bytes transferred from most recent synchronization This value represents the total bytes transferred from the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_data_bytes Metrics type Gauge Labels obj_type : Type of the object, here it is DRPC obj_name : Name of the object, here it is DRPC-Name obj_namespace : DRPC namespace scheduling_interval : Scheduling interval value from DRPolicy Metric value The value is taken from lastGroupSyncBytes from DRPC status. 6.4.5. Workload protection status This value provides the application protection status per application that is DR protected. Metric name ramen_workload_protection_status Metrics type Gauge Labels ObjType : Type of the object, here its DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Metric value The value is either a "1" or a "0", where "1" indicates application DR protection is healthy and a "0" indicates application protection degraged and potentially unprotected. 6.5. Disaster recovery alerts This section provides a list of all supported alerts associated with Red Hat OpenShift Data Foundation within a disaster recovery environment. Recording rules Record: ramen_sync_duration_seconds Expression Purpose The time interval between the volume group's last sync time and the time now in seconds. Record: ramen_rpo_difference Expression Purpose The difference between the expected sync delay and the actual sync delay taken by the volume replication group. Record: count_persistentvolumeclaim_total Expression Purpose Sum of all PVC from the managed cluster. Alerts Alert: VolumeSynchronizationDelay Impact Critical Purpose Actual sync delay taken by the volume replication group is thrice the expected sync delay. YAML Alert: VolumeSynchronizationDelay Impact Warning Purpose Actual sync delay taken by the volume replication group is twice the expected sync delay. YAML Alert: WorkloadUnprotected Impact Warning Purpose Application protection status is degraded for more than 10 minutes YAML | [
"oc label namespace openshift-operators openshift.io/cluster-monitoring='true'",
"kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist namespace: open-cluster-management-observability data: metrics_list.yaml: | names: - ceph_rbd_mirror_snapshot_sync_bytes - ceph_rbd_mirror_snapshot_snapshots matches: - __name__=\"csv_succeeded\",exported_namespace=\"openshift-dr-system\",name=~\"odr-cluster-operator.*\" - __name__=\"csv_succeeded\",exported_namespace=\"openshift-operators\",name=~\"volsync.*\"",
"oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml",
"sum by (obj_name, obj_namespace, obj_type, job, policyname)(time() - (ramen_last_sync_timestamp_seconds > 0))",
"ramen_sync_duration_seconds{job=\"ramen-hub-operator-metrics-service\"} / on(policyname, job) group_left() (ramen_policy_schedule_interval_seconds{job=\"ramen-hub-operator-metrics-service\"})",
"count(kube_persistentvolumeclaim_info)",
"alert: VolumeSynchronizationDelay expr: ramen_rpo_difference >= 3 for: 5s labels: severity: critical annotations: description: \"The syncing of volumes is exceeding three times the scheduled snapshot interval, or the volumes have been recently protected. (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }})\" alert_type: \"DisasterRecovery\"",
"alert: VolumeSynchronizationDelay expr: ramen_rpo_difference > 2 and ramen_rpo_difference < 3 for: 5s labels: severity: warning annotations: description: \"The syncing of volumes is exceeding two times the scheduled snapshot interval, or the volumes have been recently protected. (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }})\" alert_type: \"DisasterRecovery\"",
"alert: WorkloadUnprotected expr: ramen_workload_protection_status == 0 for: 10m labels: severity: warning annotations: description: \"Workload is not protected for disaster recovery (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}).\" alert_type: \"DisasterRecovery\""
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/monitoring_disaster_recovery_health |
Chapter 56. ServiceAccountService | Chapter 56. ServiceAccountService 56.1. ListServiceAccounts GET /v1/serviceaccounts 56.1.1. Description 56.1.2. Parameters 56.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 56.1.3. Return Type V1ListServiceAccountResponse 56.1.4. Content Type application/json 56.1.5. Responses Table 56.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListServiceAccountResponse 0 An unexpected error response. RuntimeError 56.1.6. Samples 56.1.7. Common object reference 56.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 56.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 56.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 56.1.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 56.1.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 56.1.7.5. StorageServiceAccount Field Name Required Nullable Type Description Format id String name String namespace String clusterName String clusterId String labels Map of string annotations Map of string createdAt Date date-time automountToken Boolean secrets List of string imagePullSecrets List of string 56.1.7.6. V1ListServiceAccountResponse Field Name Required Nullable Type Description Format saAndRoles List of V1ServiceAccountAndRoles 56.1.7.7. V1SADeploymentRelationship Field Name Required Nullable Type Description Format id String name String Name of the deployment. 56.1.7.8. V1ScopedRoles Field Name Required Nullable Type Description Format namespace String roles List of StorageK8sRole 56.1.7.9. V1ServiceAccountAndRoles Field Name Required Nullable Type Description Format serviceAccount StorageServiceAccount clusterRoles List of StorageK8sRole scopedRoles List of V1ScopedRoles deploymentRelationships List of V1SADeploymentRelationship 56.2. GetServiceAccount GET /v1/serviceaccounts/{id} 56.2.1. Description 56.2.2. Parameters 56.2.2.1. Path Parameters Name Description Required Default Pattern id X null 56.2.3. Return Type V1GetServiceAccountResponse 56.2.4. Content Type application/json 56.2.5. Responses Table 56.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetServiceAccountResponse 0 An unexpected error response. RuntimeError 56.2.6. Samples 56.2.7. Common object reference 56.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 56.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 56.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 56.2.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 56.2.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 56.2.7.5. StorageServiceAccount Field Name Required Nullable Type Description Format id String name String namespace String clusterName String clusterId String labels Map of string annotations Map of string createdAt Date date-time automountToken Boolean secrets List of string imagePullSecrets List of string 56.2.7.6. V1GetServiceAccountResponse Field Name Required Nullable Type Description Format saAndRole V1ServiceAccountAndRoles 56.2.7.7. V1SADeploymentRelationship Field Name Required Nullable Type Description Format id String name String Name of the deployment. 56.2.7.8. V1ScopedRoles Field Name Required Nullable Type Description Format namespace String roles List of StorageK8sRole 56.2.7.9. V1ServiceAccountAndRoles Field Name Required Nullable Type Description Format serviceAccount StorageServiceAccount clusterRoles List of StorageK8sRole scopedRoles List of V1ScopedRoles deploymentRelationships List of V1SADeploymentRelationship | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////",
"Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////",
"Any properties of an individual service account. (regardless of time, scope, or context) ////////////////////////////////////////",
"A list of service accounts (free of scoped information) Next Tag: 2",
"Service accounts can be used by a deployment. Next Tag: 3",
"A service account and the roles that reference it Next Tag: 5",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////",
"Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////",
"Any properties of an individual service account. (regardless of time, scope, or context) ////////////////////////////////////////",
"One service account Next Tag: 2",
"Service accounts can be used by a deployment. Next Tag: 3",
"A service account and the roles that reference it Next Tag: 5"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/serviceaccountservice |
Chapter 8. Strategies for repartitioning a disk | Chapter 8. Strategies for repartitioning a disk There are different approaches to repartitioning a disk. These include: Unpartitioned free space is available. An unused partition is available. Free space in an actively used partition is available. Note The following examples are simplified for clarity and do not reflect the exact partition layout when actually installing Red Hat Enterprise Linux. 8.1. Using unpartitioned free space Partitions that are already defined and do not span the entire hard disk, leave unallocated space that is not part of any defined partition. The following diagram shows what this might look like. Figure 8.1. Disk with unpartitioned free space The first diagram represents a disk with one primary partition and an undefined partition with unallocated space. The second diagram represents a disk with two defined partitions with allocated space. An unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. On a new disk, you can create the necessary partitions from the unused space. Most preinstalled operating systems are configured to take up all available space on a disk drive. 8.2. Using space from an unused partition In the following example, the first diagram represents a disk with an unused partition. The second diagram represents reallocating an unused partition for Linux. Figure 8.2. Disk with an unused partition To use the space allocated to the unused partition, delete the partition and then create the appropriate Linux partition instead. Alternatively, during the installation process, delete the unused partition and manually create new partitions. 8.3. Using free space from an active partition This process can be difficult to manage because an active partition, that is already in use, contains the required free space. In most cases, hard disks of computers with preinstalled software contain one larger partition holding the operating system and data. Warning If you want to use an operating system (OS) on an active partition, you must reinstall the OS. Be aware that some computers, which include pre-installed software, do not include installation media to reinstall the original OS. Check whether this applies to your OS before you destroy an original partition and the OS installation. To optimise the use of available free space, you can use the methods of destructive or non-destructive repartitioning. 8.3.1. Destructive repartitioning Destructive repartitioning destroys the partition on your hard drive and creates several smaller partitions instead. Backup any needed data from the original partition as this method deletes the complete contents. After creating a smaller partition for your existing operating system, you can: Reinstall software. Restore your data. Start your Red Hat Enterprise Linux installation. The following diagram is a simplified representation of using the destructive repartitioning method. Figure 8.3. Destructive repartitioning action on disk Warning This method deletes all data previously stored in the original partition. 8.3.2. Non-destructive repartitioning Non-destructive repartitioning resizes partitions, without any data loss. This method is reliable, however it takes longer processing time on large drives. The following is a list of methods, which can help initiate non-destructive repartitioning. Compress existing data The storage location of some data cannot be changed. This can prevent the resizing of a partition to the required size, and ultimately lead to a destructive repartition process. Compressing data in an already existing partition can help you resize your partitions as needed. It can also help to maximize the free space available. The following diagram is a simplified representation of this process. Figure 8.4. Data compression on a disk To avoid any possible data loss, create a backup before continuing with the compression process. Resize the existing partition By resizing an already existing partition, you can free up more space. Depending on your resizing software, the results may vary. In the majority of cases, you can create a new unformatted partition of the same type, as the original partition. The steps you take after resizing can depend on the software you use. In the following example, the best practice is to delete the new DOS (Disk Operating System) partition, and create a Linux partition instead. Verify what is most suitable for your disk before initiating the resizing process. Figure 8.5. Partition resizing on a disk Optional: Create new partitions Some pieces of resizing software support Linux based systems. In such cases, there is no need to delete the newly created partition after resizing. Creating a new partition afterwards depends on the software you use. The following diagram represents the disk state, before and after creating a new partition. Figure 8.6. Disk with final partition configuration | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/strategies-for-repartitioning-a-disk_managing-file-systems |
Chapter 6. Refreshing the self-signed CA certificate on hosts | Chapter 6. Refreshing the self-signed CA certificate on hosts When you change the CA certificate on your Satellite Server, you must refresh the CA certificate on your hosts. Ensure that you use a temporary dual CA certificate file for uninterrupted operation. For more information, see Planning for self-signed CA certificate renewal in Administering Red Hat Satellite . If you have already changed the CA certificate on Satellite Server without using the temporary dual CA certificate file, you must refresh the certificate on hosts manually because the scripted variant will not recognize Satellite Server. Important You only must redeploy the CA certificate if you use a self-signed CA certificate. 6.1. Deploying the CA certificate on a host by using Script REX You can use remote execution (REX) with the Script provider to deploy the CA certificate. Prerequisites The host is registered to Satellite. Remote execution is enabled on the host. The CA certificate has been changed on Satellite Server. For more information, see Planning for self-signed CA certificate renewal in Administering Red Hat Satellite . Procedure In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Commands . From the Job template list, select Download and run a script . Click . Select hosts on which you want to execute the job. In the url field, enter the following URL: Replace satellite.example.com with the FQDN of your Satellite Server. You can use HTTP when the CA certificate is expired. Optional: Click and configure advanced fields and scheduling as you require. Click Run on selected hosts . Verification If the host can access Satellite Server, the following command succeeds on your host: Replace satellite.example.com with the FQDN of your Satellite Server. If the host can access Capsule Server, the following command succeeds on your host: Replace capsule.example.com with the FQDN of your Capsule Server. Additional resources Section 13.22, "Executing a remote job" 6.2. Deploying the CA certificate on a host by using Ansible REX You can use remote execution (REX) with the Ansible provider to deploy the CA certificate. Prerequisites The host is registered to Satellite. Remote execution is enabled on the host. The CA certificate has been changed on Satellite Server. For more information, see Planning for self-signed CA certificate renewal in Administering Red Hat Satellite . Procedure In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Ansible Commands . From the Job template list, select Download and execute a script . Click . Select hosts on which you want to execute the job. In the url field, enter the following URL: Replace satellite.example.com with the FQDN of your Satellite Server. You can use HTTP when the CA certificate is expired. Optional: Click and configure advanced fields and scheduling as you require. Click Run on selected hosts . Verification If the host can access Satellite Server, the following command succeeds on your host: Replace satellite.example.com with the FQDN of your Satellite Server. If the host can access Capsule Server, the following command succeeds on your host: Replace capsule.example.com with the FQDN of your Capsule Server. Additional resources Section 13.22, "Executing a remote job" 6.3. Deploying the CA certificate on a host manually You can deploy the CA certificate on the host manually by rendering a public provisioning template, which provides the CA certificate. Prerequisites You have root access on both your Satellite Server and your host. Procedure Download the certificate on your Satellite Server: Replace satellite.example.com with the FQDN of your Satellite Server. Transfer the CA certificate to your host securely, for example by using scp . Login to your host by using SSH. Copy the certificate to the Subscription Manager configuration directory: Copy the certificate to the truststore: Update the truststore: Verification If the host can access Satellite Server, the following command succeeds on your host: Replace satellite.example.com with the FQDN of your Satellite Server. If the host can access Capsule Server, the following command succeeds on your host: Replace capsule.example.com with the FQDN of your Capsule Server. | [
"https:// satellite.example.com /unattended/public/foreman_ca_refresh",
"curl --head https:// satellite.example.com",
"curl --head https:// capsule.example.com:9090 /features",
"https:// satellite.example.com /unattended/public/foreman_ca_refresh",
"curl --head https:// satellite.example.com",
"curl --head https:// capsule.example.com:9090 /features",
"curl -o \"satellite_ca_cert.crt\" https:// satellite.example.com /unattended/public/foreman_raw_ca",
"cp -u satellite_ca_cert.crt /etc/rhsm/ca/katello-server-ca.pem",
"cp satellite_ca_cert.crt /etc/pki/ca-trust/source/anchors",
"update-ca-trust",
"curl --head https:// satellite.example.com",
"curl --head https:// capsule.example.com:9090 /features"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/refreshing-the-self-signed-ca-certificate-on-hosts |
Chapter 2. Managing certificates for users, hosts, and services using the integrated IdM CA | Chapter 2. Managing certificates for users, hosts, and services using the integrated IdM CA To learn more about how to manage certificates in Identity Management (IdM) using the integrated CA, the ipa CA, and its sub-CAs, see the following sections: Requesting new certificates for a user, host, or service using the IdM Web UI . Requesting new certificates for a user, host, or service from the IdM CA using the IdM CLI: Requesting new certificates for a user, host, or service from IdM CA using certutil For a specific example of requesting a new user certificate from the IdM CA using the certutil utility and exporting it to an IdM client, see Requesting a new user certificate and exporting it to the client . Requesting new certificates for a user, host, or service from IdM CA using openssl You can also request new certificates for a service from the IdM CA using the certmonger utility. For more information, see Requesting new certificates for a service from IdM CA using certmonger . Prerequisites Your IdM deployment contains an integrated CA: For information about how to plan your CA services in IdM, see Planning your CA services . For information about how to install an IdM server with integrated DNS and integrated CA as the root CA, see Installing an IdM server: With integrated DNS, with an integrated CA as the root CA For information about how to install an IdM server with integrated DNS and an external CA as the root CA, see Installing an IdM server: With integrated DNS, with an external CA as the root CA For information about how to install an IdM server without integrated DNS and with an integrated CA as the root CA, see Installing an IdM server: Without integrated DNS, with an integrated CA as the root CA . Optional: Your IdM deployment supports users authenticating with a certificate: For information about how to configure your IdM deployment to support user authentication with a certificate stored in the IdM client filesystem, see Configuring authentication with a certificate stored on the desktop of an IdM client . For information about how to configure your IdM deployment to support user authentication with a certificate stored on a smart card inserted into an IdM client, see Configuring Identity Management for smart card authentication . For information about how to configure your IdM deployment to support user authentication with smart cards issued by an Active Directory certificate system, see Configuring certificates issued by ADCS for smart card authentication in IdM . 2.1. Requesting new certificates for a user, host, or service using IdM Web UI Follow this procedure to use the Identity Management (IdM) Web UI to request a new certificate for any IdM entity from the integrated IdM certificate authorities (CAs): the ipa CA or any of its sub-CAs. IdM entities include: Users Hosts Services Important Services typically run on dedicated service nodes on which the private keys are stored. Copying a service's private key to the IdM server is considered insecure. Therefore, when requesting a certificate for a service, create the certificate signing request (CSR) on the service node. Prerequisites Your IdM deployment contains an integrated CA. You are logged into the IdM Web UI as the IdM administrator. Procedure Under the Identity tab, select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Figure 2.1. List of Hosts Click Actions New Certificate . Optional: Select the issuing CA and profile ID. Follow the instructions for using the certutil command-line (CLI) utility on the screen. Click Issue . 2.2. Requesting new certificates for a user, host, or service from IdM CA using certutil You can use the certutil utility to request a certificate for an Identity Management (IdM) user, host or service in standard IdM situations. To ensure that a host or service Kerberos alias can use a certificate, use the openssl utility to request a certificate instead. Follow this procedure to request a certificate for an IdM user, host, or service from ipa , the IdM certificate authority (CA), using certutil . Important Services typically run on dedicated service nodes on which the private keys are stored. Copying a service's private key to the IdM server is considered insecure. Therefore, when requesting a certificate for a service, create the certificate signing request (CSR) on the service node. Prerequisites Your IdM deployment contains an integrated CA. You are logged into the IdM command-line interface (CLI) as the IdM administrator. Procedure Create a temporary directory for the certificate database: Create a new temporary certificate database, for example: Create the CSR and redirect the output to a file. For example, to create a CSR for a 4096 bit certificate and to set the subject to CN=server.example.com,O=EXAMPLE.COM : Submit the certificate request file to the CA running on the IdM server. Specify the Kerberos principal to associate with the newly-issued certificate: The ipa cert-request command in IdM uses the following defaults: The caIPAserviceCert certificate profile To select a custom profile, use the --profile-id option. The integrated IdM root CA, ipa To select a sub-CA, use the --ca option. Additional resources See the output of the ipa cert-request --help command. See Creating and managing certificate profiles in Identity Management . 2.3. Requesting new certificates for a user, host, or service from IdM CA using openssl You can use the openssl utility to request a certificate for an Identity Management (IdM) host or service if you want to ensure that the Kerberos alias of the host or service can use the certificate. In standard situations, consider requesting a new certificate using the certutil utility instead. Follow this procedure to request a certificate for an IdM host, or service from ipa , the IdM certificate authority, using openssl . Important Services typically run on dedicated service nodes on which the private keys are stored. Copying a service's private key to the IdM server is considered insecure. Therefore, when requesting a certificate for a service, create the certificate signing request (CSR) on the service node. Prerequisites Your IdM deployment contains an integrated CA. You are logged into the IdM command-line interface (CLI) as the IdM administrator. Procedure Create one or more aliases for your Kerberos principal test/server.example.com . For example, test1/server.example.com and test2/server.example.com . In the CSR, add a subjectAltName for dnsName ( server.example.com ) and otherName ( test2/server.example.com ). To do this, configure the openssl.conf file to include the following line specifying the UPN otherName and subjectAltName: Create a certificate request using openssl : Submit the certificate request file to the CA running on the IdM server. Specify the Kerberos principal to associate with the newly-issued certificate: The ipa cert-request command in IdM uses the following defaults: The caIPAserviceCert certificate profile To select a custom profile, use the --profile-id option. The integrated IdM root CA, ipa To select a sub-CA, use the --ca option. Additional resources See the output of the ipa cert-request --help command. See Creating and managing certificate profiles in Identity Management . 2.4. Additional resources See Revoking certificates with the integrated IdM CAs . See Restoring certificates with the integrated IdM CAs . See Restricting an application to trust only a subset of certificates . | [
"mkdir ~/certdb/",
"certutil -N -d ~/certdb/",
"certutil -R -d ~/certdb/ -a -g 4096 -s \" CN=server.example.com,O=EXAMPLE.COM \" -8 server.example.com > certificate_request.csr",
"ipa cert-request certificate_request.csr --principal= host/server.example.com",
"otherName= 1.3.6.1.4.1.311.20.2.3 ;UTF8: test2/[email protected] DNS.1 = server.example.com",
"openssl req -new -newkey rsa: 2048 -keyout test2service.key -sha256 -nodes -out certificate_request.csr -config openssl.conf",
"ipa cert-request certificate_request.csr --principal= host/server.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/managing-certificates-for-users-hosts-and-services-using-the-integrated-idm-ca_working-with-idm-certificates |
Chapter 8. Important links | Chapter 8. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2022-02-02 16:19:20 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_openshift/important-links-str |
Chapter 8. Integrating with Sumo Logic | Chapter 8. Integrating with Sumo Logic If you are using Sumo Logic , you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Sumo Logic. The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with Sumo Logic: Add a new Custom App in Sumo Logic, set the HTTP source, and get the HTTP URL. Use the HTTP URL to integrate Sumo Logic with Red Hat Advanced Cluster Security for Kubernetes. Identify the policies you want to send notifications for, and update the notification settings for those policies. 8.1. Configuring Sumo Logic Use the Setup Wizard to set up Streaming Data and get the HTTP URL. Procedure Log in to your Sumo Logic Home page and select Setup Wizard . Move your cursor over to Set Up Streaming Data and select Get Started . On the Select Data Type page, select Your Custom App . On the Set Up Collection page, select HTTP Source . Enter a name for Source Category , for example, rhacs and click Continue . Copy the generated URL. 8.2. Configuring Red Hat Advanced Cluster Security for Kubernetes Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the HTTP URL. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Sumo Logic . Click New Integration ( add icon). Enter a name for Integration Name . Enter the generated HTTP URL in the HTTP Collector Source Address field. Click Test ( checkmark icon) to test that the integration with Sumo Logic is working. Click Create ( save icon) to create the configuration. 8.3. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the Sumo Logic notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. 8.4. Viewing alerts in Sumo Logic You can view alerts from Red Hat Advanced Cluster Security for Kubernetes in Sumo Logic. Log in to your Sumo Logic Home page and click Log Search . In the search box, enter _sourceCategory=rhacs . Make sure to use the same Source Category name that you entered while configuring Sumo Logic. Select the time and then click Start . | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-sumologic |
5.76. gawk | 5.76. gawk 5.76.1. RHBA-2012:1146 - gawk bug fix update Updated gawk packages that fix one bug are now available for Red Hat Enterprise Linux 6. The gawk packages provide the GNU version of the text processing utility awk. Awk interprets a special-purpose programming language to do quick and easy text pattern matching and reformatting jobs. Bug Fix BZ# 829558 Prior to this update, the "re_string_skip_chars" function incorrectly used the character count instead of the raw length to estimate the string length. As a consequence, any text in multi-byte encoding that did not use the UTF-8 format failed to be processed correctly. This update modifies the underlying code so that the correct string length is used. multi-byte encoding is processed correctly. All users of gawk requiring multi-byte encodings that do not use UTF-8 are advised to upgrade to these updated packages, which fix this bug. 5.76.2. RHBA-2012:0385 - gawk bug fix update An updated gawk package that fixes three bugs is now available for Red Hat Enterprise Linux 6. The gawk package contains the GNU version of awk, a text processing utility. AWK interprets a special-purpose programming language to do quick and easy text pattern matching and reformatting jobs. Bug Fixes BZ# 648906 Prior to this update, the gawk utility could, under certain circumstances, interpret some run-time variables as internal zero-length variable prototypes. When gawk tried to free such run-time variables, it actually freed the internal prototypes, that were allocated just once due to memory savings. As a consequence, gawk sometimes failed and the error message "awk: double free or corruption" was displayed. With this update the problem has been corrected and the error no longer occurs. BZ# 740673 Prior to this update, the gawk utility did not copy variables from the command line arguments. As a consequence, the variables were not accessible as intended. This update modifies the underlying code so that gawk makes copies of those variables. BZ# 743242 Prior to this update, the Yacc interpreter encountered problems handling larger stacks. As a consequence, the Yacc interpreter could fail with a stack overflow error when interpreting the AWK code. This update enlarges the stack and Yacc can now handle these AWK programs. All users of gawk are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gawk |
Chapter 4. Examples | Chapter 4. Examples This chapter demonstrates the use of AMQ Ruby through example programs. For more examples, see the AMQ Ruby example suite and the Qpid Proton Ruby examples . 4.1. Sending messages This client program connects to a server using <connection-url> , creates a sender for target <address> , sends a message containing <message-body> , closes the connection, and exits. Example: Sending messages require 'qpid_proton' class SendHandler < Qpid::Proton::MessagingHandler def initialize(conn_url, address, message_body) super() @conn_url = conn_url @address = address @message_body = message_body end def on_container_start(container) conn = container.connect(@conn_url) conn.open_sender(@address) end def on_sender_open(sender) puts "SEND: Opened sender for target address '#{sender.target.address}'\n" end def on_sendable(sender) message = Qpid::Proton::Message.new(@message_body) sender.send(message) puts "SEND: Sent message '#{message.body}'\n" sender.close sender.connection.close end end if ARGV.size == 3 conn_url, address, message_body = ARGV else abort "Usage: send.rb <connection-url> <address> <message-body>\n" end handler = SendHandler.new(conn_url, address, message_body) container = Qpid::Proton::Container.new(handler) container.run Running the example To run the example program, copy it to a local file and invoke it using the ruby command. For more information, see Chapter 3, Getting started . USD ruby send.rb amqp://localhost queue1 hello 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. Example: Receiving messages require 'qpid_proton' class ReceiveHandler < Qpid::Proton::MessagingHandler def initialize(conn_url, address, desired) super() @conn_url = conn_url @address = address @desired = desired @received = 0 end def on_container_start(container) conn = container.connect(@conn_url) conn.open_receiver(@address) end def on_receiver_open(receiver) puts "RECEIVE: Opened receiver for source address '#{receiver.source.address}'\n" end def on_message(delivery, message) puts "RECEIVE: Received message '#{message.body}'\n" @received += 1 if @received == @desired delivery.receiver.close delivery.receiver.connection.close end end end if ARGV.size > 1 conn_url, address = ARGV[0..1] else abort "Usage: receive.rb <connection-url> <address> [<message-count>]\n" end begin desired = Integer(ARGV[2]) rescue TypeError desired = 0 end handler = ReceiveHandler.new(conn_url, address, desired) container = Qpid::Proton::Container.new(handler) container.run Running the example To run the example program, copy it to a local file and invoke it using the ruby command. For more information, see Chapter 3, Getting started . USD ruby receive.rb amqp://localhost queue1 | [
"require 'qpid_proton' class SendHandler < Qpid::Proton::MessagingHandler def initialize(conn_url, address, message_body) super() @conn_url = conn_url @address = address @message_body = message_body end def on_container_start(container) conn = container.connect(@conn_url) conn.open_sender(@address) end def on_sender_open(sender) puts \"SEND: Opened sender for target address '#{sender.target.address}'\\n\" end def on_sendable(sender) message = Qpid::Proton::Message.new(@message_body) sender.send(message) puts \"SEND: Sent message '#{message.body}'\\n\" sender.close sender.connection.close end end if ARGV.size == 3 conn_url, address, message_body = ARGV else abort \"Usage: send.rb <connection-url> <address> <message-body>\\n\" end handler = SendHandler.new(conn_url, address, message_body) container = Qpid::Proton::Container.new(handler) container.run",
"ruby send.rb amqp://localhost queue1 hello",
"require 'qpid_proton' class ReceiveHandler < Qpid::Proton::MessagingHandler def initialize(conn_url, address, desired) super() @conn_url = conn_url @address = address @desired = desired @received = 0 end def on_container_start(container) conn = container.connect(@conn_url) conn.open_receiver(@address) end def on_receiver_open(receiver) puts \"RECEIVE: Opened receiver for source address '#{receiver.source.address}'\\n\" end def on_message(delivery, message) puts \"RECEIVE: Received message '#{message.body}'\\n\" @received += 1 if @received == @desired delivery.receiver.close delivery.receiver.connection.close end end end if ARGV.size > 1 conn_url, address = ARGV[0..1] else abort \"Usage: receive.rb <connection-url> <address> [<message-count>]\\n\" end begin desired = Integer(ARGV[2]) rescue TypeError desired = 0 end handler = ReceiveHandler.new(conn_url, address, desired) container = Qpid::Proton::Container.new(handler) container.run",
"ruby receive.rb amqp://localhost queue1"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_ruby_client/examples |
Chapter 3. Avro Serialize Action | Chapter 3. Avro Serialize Action Serialize payload to Avro 3.1. Configuration Options The following table summarizes the configuration options available for the avro-serialize-action Kamelet: Property Name Description Type Default Example schema * Schema The Avro schema to use during serialization (as single-line, using JSON format) string "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" validate Validate Indicates if the content must be validated against the schema boolean true Note Fields marked with an asterisk (*) are mandatory. 3.2. Dependencies At runtime, the avro-serialize-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:kamelet camel:core camel:jackson-avro 3.3. Usage This section describes how you can use the avro-serialize-action . 3.3.1. Knative Action You can use the avro-serialize-action Kamelet as an intermediate step in a Knative binding. avro-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first":"Ada","last":"Lovelace"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 3.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 3.3.1.2. Procedure for using the cluster CLI Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f avro-serialize-action-binding.yaml 3.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name avro-serialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 3.3.2. Kafka Action You can use the avro-serialize-action Kamelet as an intermediate step in a Kafka binding. avro-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first":"Ada","last":"Lovelace"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 3.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 3.3.2.2. Procedure for using the cluster CLI Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f avro-serialize-action-binding.yaml 3.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name avro-serialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 3.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/avro-serialize-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\":\"Ada\",\"last\":\"Lovelace\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f avro-serialize-action-binding.yaml",
"kamel bind --name avro-serialize-action-binding timer-source?message='{\"first\":\"Ada\",\"last\":\"Lovelace\"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\":\"Ada\",\"last\":\"Lovelace\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f avro-serialize-action-binding.yaml",
"kamel bind --name avro-serialize-action-binding timer-source?message='{\"first\":\"Ada\",\"last\":\"Lovelace\"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/avro-serialize-action |
Chapter 6. Developing a Kafka client | Chapter 6. Developing a Kafka client Create a Kafka client in your preferred programming language and connect it to Streams for Apache Kafka. To interact with a Kafka cluster, client applications need to be able to produce and consume messages. To develop and configure a basic Kafka client application, as a minimum, you must do the following: Set up configuration to connect to a Kafka cluster Use producers and consumers to send and receive messages Setting up the basic configuration for connecting to a Kafka cluster and using producers and consumers is the first step in developing a Kafka client. After that, you can expand into improving the inputs, security, performance, error handling, and functionality of the client application. Prerequisites You can create a client properties file that contains property values for the following: Basic configuration to connect to the Kafka cluster Configuration for securing the connection Procedure Choose a Kafka client library for your programming language, e.g. Java, Python, .NET, etc. Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library. Install the library, either through a package manager or manually by downloading the library from its source. Import the necessary classes and dependencies for your Kafka client in your code. Create a Kafka consumer or producer object, depending on the type of client you want to create. A client can be a Kafka consumer, producer, Streams processor, and admin. Provide the configuration properties to connect to the Kafka cluster, including the broker address, port, and credentials if necessary. For a local Kafka deployment, you might start with an address like localhost:9092 . However, when working with a Kafka cluster managed by Streams for Apache Kafka, you can obtain the bootstrap address from the Kafka custom resource status using an oc command: oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[*].bootstrapServers}{"\n"}' This command retrieves the bootstrap addresses exposed by listeners for client connections on a Kafka cluster. Use the Kafka consumer or producer object to subscribe to topics, produce messages, or retrieve messages from the Kafka cluster. Pay attention to error handling; it's vitally important when connecting and communicating with Kafka, especially in production systems where high availability and ease of operations are valued. Effective error handling is a key differentiator between a prototype and a production-grade application, and it applies not only to Kafka but also to any robust software system. 6.1. Example Kafka producer application This Java-based Kafka producer application is an example of a self-contained application that produces messages to a Kafka topic. The client uses the Kafka Producer API to send messages asynchronously, with some error handling. The client implements the Callback interface for message handling. To run the Kafka producer application, you execute the main method in the Producer class. The client generates a random byte array as the message payload using the randomBytes method. The client produces messages to a specified Kafka topic until NUM_MESSAGES messages (50 in the example configuration) have been sent. The producer is thread-safe, allowing multiple threads to use a single producer instance. Kafka producer instances are designed to be thread-safe, allowing multiple threads to share a single producer instance. This example client provides a basic foundation for building more complex Kafka producers for specific use cases. You can incorporate additional functionality, such as implementing secure connections . Prerequisites Kafka brokers running on the specified BOOTSTRAP_SERVERS A Kafka topic named TOPIC_NAME to which messages are produced. Client dependencies Before implementing the Kafka producer application, your project must include the necessary dependencies. For a Java-based Kafka client, include the Kafka client JAR. This JAR file contains the Kafka libraries required for building and running the client. For information on how to add the dependencies to a pom.xml file in a Maven project, see Section 3.1, "Adding a Kafka clients dependency to your Maven project" . Configuration You can configure the producer application through the following constants specified in the Producer class: BOOTSTRAP_SERVERS The address and port to connect to the Kafka brokers. TOPIC_NAME The name of the Kafka topic to produce messages to. NUM_MESSAGES The number of messages to produce before stopping. MESSAGE_SIZE_BYTES The size of each message in bytes. PROCESSING_DELAY_MS The delay in milliseconds between sending messages. This can simulate message processing time, which is useful for testing. Example producer application import java.util.Properties; import java.util.Random; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.producer.Callback; import org.apache.kafka.clients.producer.KafkaProducer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import org.apache.kafka.clients.producer.RecordMetadata; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArraySerializer; import org.apache.kafka.common.serialization.LongSerializer; public class Producer implements Callback { private static final Random RND = new Random(0); private static final String BOOTSTRAP_SERVERS = "localhost:9092"; private static final String TOPIC_NAME = "my-topic"; private static final long NUM_MESSAGES = 50; private static final int MESSAGE_SIZE_BYTES = 100; private static final long PROCESSING_DELAY_MS = 1000L; protected AtomicLong messageCount = new AtomicLong(0); public static void main(String[] args) { new Producer().run(); } public void run() { System.out.println("Running producer"); try (var producer = createKafkaProducer()) { 1 byte[] value = randomBytes(MESSAGE_SIZE_BYTES); 2 while (messageCount.get() < NUM_MESSAGES) { 3 sleep(PROCESSING_DELAY_MS); 4 producer.send(new ProducerRecord<>(TOPIC_NAME, messageCount.get(), value), this); 5 messageCount.incrementAndGet(); } } } private KafkaProducer<Long, byte[]> createKafkaProducer() { Properties props = new Properties(); 6 props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 7 props.put(ProducerConfig.CLIENT_ID_CONFIG, "client-" + UUID.randomUUID()); 8 props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class); 9 props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class); return new KafkaProducer<>(props); } private void sleep(long ms) { 10 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private byte[] randomBytes(int size) { 11 if (size <= 0) { throw new IllegalArgumentException("Record size must be greater than zero"); } byte[] payload = new byte[size]; for (int i = 0; i < payload.length; ++i) { payload[i] = (byte) (RND.nextInt(26) + 65); } return payload; } private boolean retriable(Exception e) { 12 if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || !(e instanceof RetriableException)) { return false; } else { return true; } } @Override public void onCompletion(RecordMetadata metadata, Exception e) { 13 if (e != null) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } else { System.out.printf("Record sent to %s-%d with offset %d%n", metadata.topic(), metadata.partition(), metadata.offset()); } } } 1 The client creates a Kafka producer using the createKafkaProducer method. The producer sends messages to the Kafka topic asynchronously. 2 A byte array is used as the payload for each message sent to the Kafka topic. 3 The maximum number of messages sent is determined by the NUM_MESSAGES constant value. 4 The message rate is controlled with a delay between each message sent. 5 The producer passes the topic name, the message count value, and the message value. 6 The client creates the KafkaProducer instance using the provided configuration. You can use a properties file or add the configuration directly. For more information on the basic configuration, see Chapter 4, Configuring client applications for connecting to a Kafka cluster . 7 The connection to the Kafka brokers. 8 A unique client ID for the producer using a randomly generated UUID. A client ID is not required, but it is useful to track the source of requests. 9 The appropriate serializer classes for handling keys and values as byte arrays. 10 Method to introduce a delay to the message sending process for a specified number of milliseconds. If the thread responsible for sending messages is interrupted while paused, it throws an InterruptedException error. 11 Method to create a random byte array of a specific size, which serves as the payload for each message sent to the Kafka topic. The method generates a random integer and adds 65 to represent an uppercase letter in ascii code (65 is A , 66 is B , and so on). The ascii code is stored as a single byte in the payload array. If the payload size is not greater than zero, it throws an IllegalArgumentException . 12 Method to check whether to retry sending a message following an exception. The Kafka producer automatically handles retries for certain errors, such as connection errors. You can customize this method to include other errors. Returns false for null and specified exceptions, or those that do not implement the RetriableException interface. 13 Method called when a message has been acknowledged by the Kafka broker. On success, a message is printed with the details of the topic, partition, and offset position for the message. If an error ocurred when sending the message, an error message is printed. The method checks the exception and takes appropriate action based on whether it's a fatal or non-fatal error. If the error is non-fatal, the message sending process continues. If the error is fatal, a stack trace is printed and the producer is terminated. Error handling Fatal exceptions caught by the producer application: InterruptedException Error thrown when the current thread is interrupted while paused. Interruption typically occurs when stopping or shutting down the producer. The exception is rethrown as a RuntimeException , which terminates the producer. IllegalArgumentException Error thrown when the producer receives invalid or inappropriate arguments. For example, the exception is thrown if the topic is missing. UnsupportedOperationException Error thrown when an operation is not supported or a method is not implemented. For example, the exception is thrown if an attempt is made to use an unsupported producer configuration or call a method that is not supported by the KafkaProducer class. Non-fatal exceptions caught by the producer application: RetriableException Error thrown for any exception that implements the RetriableException interface provided by the Kafka client library. With non-fatal errors, the producer continues to send messages. Note By default, Kafka operates with at-least-once message delivery semantics, which means that messages can be delivered more than once in certain scenarios, potentially leading to duplicates. To avoid this risk, consider enabling transactions in your Kafka producer . Transactions provide stronger guarantees of exactly-once delivery. Additionally, you can use the retries configuration property to control how many times the producer will retry sending a message before giving up. This setting affects how many times the retriable method may return true during a message send error. 6.2. Example Kafka consumer application This Java-based Kafka consumer application is an example of a self-contained application that consumes messages from a Kafka topic. The client uses the Kafka Consumer API to fetch and process messages from a specified topic asynchronously, with some error handling. It follows at-least-once semantics by committing offsets after successfully processing messages. The client implements the ConsumerRebalanceListener interface for partition handling and the OffsetCommitCallback interface for committing offsets. To run the Kafka consumer application, you execute the main method in the Consumer class. The client consumes messages from the Kafka topic until NUM_MESSAGES messages (50 in the example configuration) have been consumed. The consumer is not designed to be safely accessed concurrently by multiple threads. This example client provides a basic foundation for building more complex Kafka consumers for specific use cases. You can incorporate additional functionality, such as implementing secure connections . Prerequisites Kafka brokers running on the specified BOOTSTRAP_SERVERS A Kafka topic named TOPIC_NAME from which messages are consumed. Client dependencies Before implementing the Kafka consumer application, your project must include the necessary dependencies. For a Java-based Kafka client, include the Kafka client JAR. This JAR file contains the Kafka libraries required for building and running the client. For information on how to add the dependencies to a pom.xml file in a Maven project, see Section 3.1, "Adding a Kafka clients dependency to your Maven project" . Configuration You can configure the consumer application through the following constants specified in the Consumer class: BOOTSTRAP_SERVERS The address and port to connect to the Kafka brokers. GROUP_ID The consumer group identifier. POLL_TIMEOUT_MS The maximum time to wait for new messages during each poll. TOPIC_NAME The name of the Kafka topic to consume messages from. NUM_MESSAGES The number of messages to consume before stopping. PROCESSING_DELAY_MS The delay in milliseconds between sending messages. This can simulate message processing time, which is useful for testing. Example consumer application import java.util.Collection; import java.util.HashMap; import java.util.Map; import java.util.Properties; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.consumer.ConsumerConfig; import org.apache.kafka.clients.consumer.ConsumerRebalanceListener; import org.apache.kafka.clients.consumer.ConsumerRecord; import org.apache.kafka.clients.consumer.ConsumerRecords; import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.clients.consumer.NoOffsetForPartitionException; import org.apache.kafka.clients.consumer.OffsetAndMetadata; import org.apache.kafka.clients.consumer.OffsetCommitCallback; import org.apache.kafka.clients.consumer.OffsetOutOfRangeException; import org.apache.kafka.common.TopicPartition; import org.apache.kafka.common.errors.RebalanceInProgressException; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArrayDeserializer; import org.apache.kafka.common.serialization.LongDeserializer; import static java.time.Duration.ofMillis; import static java.util.Collections.singleton; public class Consumer implements ConsumerRebalanceListener, OffsetCommitCallback { private static final String BOOTSTRAP_SERVERS = "localhost:9092"; private static final String GROUP_ID = "my-group"; private static final long POLL_TIMEOUT_MS = 1_000L; private static final String TOPIC_NAME = "my-topic"; private static final long NUM_MESSAGES = 50; private static final long PROCESSING_DELAY_MS = 1_000L; private KafkaConsumer<Long, byte[]> kafkaConsumer; protected AtomicLong messageCount = new AtomicLong(0); private Map<TopicPartition, OffsetAndMetadata> pendingOffsets = new HashMap<>(); public static void main(String[] args) { new Consumer().run(); } public void run() { System.out.println("Running consumer"); try (var consumer = createKafkaConsumer()) { 1 kafkaConsumer = consumer; consumer.subscribe(singleton(TOPIC_NAME), this); 2 System.out.printf("Subscribed to %s%n", TOPIC_NAME); while (messageCount.get() < NUM_MESSAGES) { 3 try { ConsumerRecords<Long, byte[]> records = consumer.poll(ofMillis(POLL_TIMEOUT_MS)); 4 if (!records.isEmpty()) { 5 for (ConsumerRecord<Long, byte[]> record : records) { System.out.printf("Record fetched from %s-%d with offset %d%n", record.topic(), record.partition(), record.offset()); sleep(PROCESSING_DELAY_MS); 6 pendingOffsets.put(new TopicPartition(record.topic(), record.partition()), 7 new OffsetAndMetadata(record.offset() + 1, null)); if (messageCount.incrementAndGet() == NUM_MESSAGES) { break; } } consumer.commitAsync(pendingOffsets, this); 8 pendingOffsets.clear(); } } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) { 9 System.out.println("Invalid or no offset found, and auto.reset.policy unset, using latest"); consumer.seekToEnd(e.partitions()); consumer.commitSync(); } catch (Exception e) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } } } private KafkaConsumer<Long, byte[]> createKafkaConsumer() { Properties props = new Properties(); 10 props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 11 props.put(ConsumerConfig.CLIENT_ID_CONFIG, "client-" + UUID.randomUUID()); 12 props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID); 13 props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class); 14 props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class); props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); 15 props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); 16 return new KafkaConsumer<>(props); } private void sleep(long ms) { 17 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private boolean retriable(Exception e) { 18 if (e == null) { return false; } else if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || !(e instanceof RebalanceInProgressException) || !(e instanceof RetriableException)) { return false; } else { return true; } } @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) { 19 System.out.printf("Assigned partitions: %s%n", partitions); } @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { 20 System.out.printf("Revoked partitions: %s%n", partitions); kafkaConsumer.commitSync(pendingOffsets); pendingOffsets.clear(); } @Override public void onPartitionsLost(Collection<TopicPartition> partitions) { 21 System.out.printf("Lost partitions: {}", partitions); } @Override public void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception e) { 22 if (e != null) { System.err.println("Failed to commit offsets"); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } } 1 The client creates a Kafka consumer using the createKafkaConsumer method. 2 The consumer subscribes to a specific topic. After subscribing to the topic, a confirmation message is printed. 3 The maximum number of messages consumed is determined by the NUM_MESSAGES constant value. 4 The poll to fetch messages must be called within session.timeout.ms to avoid a rebalance. 5 A condition to check that the records object containing the batch messages fetched from Kafka is not empty. If the records object is empty, there are no new messages to process and the process is skipped. 6 Method to introduce a delay to the message fetching process for a specified number of milliseconds. 7 The consumer uses a pendingOffsets map to store the offsets of the consumed messages that need to be committed. 8 After processing a batch of messages, the consumer asynchronously commits the offsets using the commitAsync method, implementing at-least-once semantics. 9 A catch to handle non-fatal and fatal errors when consuming messages and auto-reset policy is not set. For non-fatal errors, the consumer seeks to the end of the partition and starts consuming from the latest available offset. If an exception cannot be retried, a stack trace is printed, and the consumer is terminated. 10 The client creates the KafkaConsumer instance using the provided configuration. You can use a properties file or add the configuration directly. For more information on the basic configuration, see Chapter 4, Configuring client applications for connecting to a Kafka cluster . 11 The connection to the Kafka brokers. 12 A unique client ID for the producer using a randomly generated UUID. A client ID is not required, but it is useful to track the source of requests. 13 The group ID for consumer coordination of assignments to partitions. 14 The appropriate deserializer classes for handling keys and values as byte arrays. 15 Configuration to disable automatic offset commits. 16 Configuration for the consumer to start consuming messages from the earliest available offset when no committed offset is found for a partition. 17 Method to introduce a delay to the message consuming process for a specified number of milliseconds. If the thread responsible for sending messages is interrupted while paused, it throws an InterruptedException error. 18 Method to check whether to retry committing a message following an exception. Null and specified exceptions are not retried, nor are exceptions that do not implement the RebalanceInProgressException or RetriableException interfaces. You can customize this method to include other errors. 19 Method to print a message to the console indicating the list of partitions that have been assigned to the consumer. 20 Method called when the consumer is about to lose ownership of partitions during a consumer group rebalance. The method prints the list of partitions that are being revoked from the consumer. Any pending offsets are committed. 21 Method called when the consumer loses ownership of partitions during a consumer group rebalance, but failed to commit any pending offsets. The method prints the list of partitions lost by the consumer. 22 Method called when the consumer is committing offsets to Kafka. If an error ocurred when committing an offset, an error message is printed. The method checks the exception and takes appropriate action based on whether it's a fatal or non-fatal error. If the error is non-fatal, the offset committing process continues. If the error is fatal, a stack trace is printed and the consumer is terminated. Error handling Fatal exceptions caught by the consumer application: InterruptedException Error thrown when the current thread is interrupted while paused. Interruption typically occurs when stopping or shutting down the consumer. The exception is rethrown as a RuntimeException , which terminates the consumer. IllegalArgumentException Error thrown when the consumer receives invalid or inappropriate arguments. For example, the exception is thrown if the topic is missing. UnsupportedOperationException Error thrown when an operation is not supported or a method is not implemented. For example, the exception is thrown if an attempt is made to use an unsupported consumer configuration or call a method that is not supported by the KafkaConsumer class. Non-fatal exceptions caught by the consumer application: OffsetOutOfRangeException Error thrown when the consumer attempts to seek to an invalid offset for a partition, typically when the offset is outside the valid range of offsets for that partition, and auto-reset policy is not enabled. To recover, the consumer seeks to the end of the partition to commit the offset synchronously ( commitSync ). If auto-reset policy is enabled, the consumer seeks to the start or end of the partition depending on the setting. NoOffsetForPartitionException Error thrown when there is no committed offset for a partition or the requested offset is invalid, and auto-reset policy is not enabled. To recover, the consumer seeks to the end of the partition to commit the offset synchronously ( commitSync ). If auto-reset policy is enabled, the consumer seeks to the start or end of the partition depending on the setting. RebalanceInProgressException Error thrown during a consumer group rebalance when partitions are being assigned. Offset commits cannot be completed when the consumer is undergoing a rebalance. RetriableException Error thrown for any exception that implements the RetriableException interface provided by the Kafka client library. With non-fatal errors, the consumer continues to process messages. 6.3. Using cooperative rebalancing with consumers Kafka consumers use a partition assignment strategy determined by the rebalancing protocol in place. By default, Kafka employs the RangeAssignor protocol, which involves consumers relinquishing their partition assignments during a rebalance, leading to potential service disruptions. To improve efficiency and reduce downtime, you can switch to the CooperativeStickyAssignor protocol, a cooperative rebalancing approach. Unlike the default protocol, cooperative rebalancing enables consumers to work together, retaining their partition assignments during a rebalance, and releasing partitions only when necessary to achieve a balance within the consumer group. Procedure In the consumer configuration, use the partition.assignment.strategy property to switch to using CooperativeStickyAssignor as the protocol. For example, if the current configuration is partition.assignment.strategy=RangeAssignor, CooperativeStickyAssignor , update it to partition.assignment.strategy=CooperativeStickyAssignor . Instead of modifying the consumer configuration file directly, you can also set the partition assignment strategy using props.put in the consumer application code: # ... props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, "org.apache.kafka.clients.consumer.CooperativeStickyAssignor"); # ... Restart each consumer in the group one at a time, allowing them to rejoin the group after each restart. Warning After switching to the CooperativeStickyAssignor protocol, a RebalanceInProgressException may occur during consumer rebalancing, leading to unexpected stoppages of multiple Kafka clients in the same consumer group. Additionally, this issue may result in the duplication of uncommitted messages, even if Kafka consumers have not changed their partition assignments during rebalancing. If you are using automatic offset commits ( enable.auto.commit=true ), you don't need to make any changes. If you are manually committing offsets ( enable.auto.commit=false ), and a RebalanceInProgressException occurs during the manual commit, change the consumer implementation to call poll() in the loop to complete the consumer rebalancing process. For more information, see the CooperativeStickyAssignor article on the customer portal. | [
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[*].bootstrapServers}{\"\\n\"}'",
"import java.util.Properties; import java.util.Random; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.producer.Callback; import org.apache.kafka.clients.producer.KafkaProducer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import org.apache.kafka.clients.producer.RecordMetadata; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArraySerializer; import org.apache.kafka.common.serialization.LongSerializer; public class Producer implements Callback { private static final Random RND = new Random(0); private static final String BOOTSTRAP_SERVERS = \"localhost:9092\"; private static final String TOPIC_NAME = \"my-topic\"; private static final long NUM_MESSAGES = 50; private static final int MESSAGE_SIZE_BYTES = 100; private static final long PROCESSING_DELAY_MS = 1000L; protected AtomicLong messageCount = new AtomicLong(0); public static void main(String[] args) { new Producer().run(); } public void run() { System.out.println(\"Running producer\"); try (var producer = createKafkaProducer()) { 1 byte[] value = randomBytes(MESSAGE_SIZE_BYTES); 2 while (messageCount.get() < NUM_MESSAGES) { 3 sleep(PROCESSING_DELAY_MS); 4 producer.send(new ProducerRecord<>(TOPIC_NAME, messageCount.get(), value), this); 5 messageCount.incrementAndGet(); } } } private KafkaProducer<Long, byte[]> createKafkaProducer() { Properties props = new Properties(); 6 props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 7 props.put(ProducerConfig.CLIENT_ID_CONFIG, \"client-\" + UUID.randomUUID()); 8 props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class); 9 props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class); return new KafkaProducer<>(props); } private void sleep(long ms) { 10 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private byte[] randomBytes(int size) { 11 if (size <= 0) { throw new IllegalArgumentException(\"Record size must be greater than zero\"); } byte[] payload = new byte[size]; for (int i = 0; i < payload.length; ++i) { payload[i] = (byte) (RND.nextInt(26) + 65); } return payload; } private boolean retriable(Exception e) { 12 if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || !(e instanceof RetriableException)) { return false; } else { return true; } } @Override public void onCompletion(RecordMetadata metadata, Exception e) { 13 if (e != null) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } else { System.out.printf(\"Record sent to %s-%d with offset %d%n\", metadata.topic(), metadata.partition(), metadata.offset()); } } }",
"import java.util.Collection; import java.util.HashMap; import java.util.Map; import java.util.Properties; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.consumer.ConsumerConfig; import org.apache.kafka.clients.consumer.ConsumerRebalanceListener; import org.apache.kafka.clients.consumer.ConsumerRecord; import org.apache.kafka.clients.consumer.ConsumerRecords; import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.clients.consumer.NoOffsetForPartitionException; import org.apache.kafka.clients.consumer.OffsetAndMetadata; import org.apache.kafka.clients.consumer.OffsetCommitCallback; import org.apache.kafka.clients.consumer.OffsetOutOfRangeException; import org.apache.kafka.common.TopicPartition; import org.apache.kafka.common.errors.RebalanceInProgressException; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArrayDeserializer; import org.apache.kafka.common.serialization.LongDeserializer; import static java.time.Duration.ofMillis; import static java.util.Collections.singleton; public class Consumer implements ConsumerRebalanceListener, OffsetCommitCallback { private static final String BOOTSTRAP_SERVERS = \"localhost:9092\"; private static final String GROUP_ID = \"my-group\"; private static final long POLL_TIMEOUT_MS = 1_000L; private static final String TOPIC_NAME = \"my-topic\"; private static final long NUM_MESSAGES = 50; private static final long PROCESSING_DELAY_MS = 1_000L; private KafkaConsumer<Long, byte[]> kafkaConsumer; protected AtomicLong messageCount = new AtomicLong(0); private Map<TopicPartition, OffsetAndMetadata> pendingOffsets = new HashMap<>(); public static void main(String[] args) { new Consumer().run(); } public void run() { System.out.println(\"Running consumer\"); try (var consumer = createKafkaConsumer()) { 1 kafkaConsumer = consumer; consumer.subscribe(singleton(TOPIC_NAME), this); 2 System.out.printf(\"Subscribed to %s%n\", TOPIC_NAME); while (messageCount.get() < NUM_MESSAGES) { 3 try { ConsumerRecords<Long, byte[]> records = consumer.poll(ofMillis(POLL_TIMEOUT_MS)); 4 if (!records.isEmpty()) { 5 for (ConsumerRecord<Long, byte[]> record : records) { System.out.printf(\"Record fetched from %s-%d with offset %d%n\", record.topic(), record.partition(), record.offset()); sleep(PROCESSING_DELAY_MS); 6 pendingOffsets.put(new TopicPartition(record.topic(), record.partition()), 7 new OffsetAndMetadata(record.offset() + 1, null)); if (messageCount.incrementAndGet() == NUM_MESSAGES) { break; } } consumer.commitAsync(pendingOffsets, this); 8 pendingOffsets.clear(); } } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) { 9 System.out.println(\"Invalid or no offset found, and auto.reset.policy unset, using latest\"); consumer.seekToEnd(e.partitions()); consumer.commitSync(); } catch (Exception e) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } } } private KafkaConsumer<Long, byte[]> createKafkaConsumer() { Properties props = new Properties(); 10 props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 11 props.put(ConsumerConfig.CLIENT_ID_CONFIG, \"client-\" + UUID.randomUUID()); 12 props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID); 13 props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class); 14 props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class); props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); 15 props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, \"earliest\"); 16 return new KafkaConsumer<>(props); } private void sleep(long ms) { 17 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private boolean retriable(Exception e) { 18 if (e == null) { return false; } else if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || !(e instanceof RebalanceInProgressException) || !(e instanceof RetriableException)) { return false; } else { return true; } } @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) { 19 System.out.printf(\"Assigned partitions: %s%n\", partitions); } @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { 20 System.out.printf(\"Revoked partitions: %s%n\", partitions); kafkaConsumer.commitSync(pendingOffsets); pendingOffsets.clear(); } @Override public void onPartitionsLost(Collection<TopicPartition> partitions) { 21 System.out.printf(\"Lost partitions: {}\", partitions); } @Override public void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception e) { 22 if (e != null) { System.err.println(\"Failed to commit offsets\"); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } }",
"props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, \"org.apache.kafka.clients.consumer.CooperativeStickyAssignor\");"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/developing_kafka_client_applications/proc-generic-java-client-str |
Chapter 14. Setting up a broker cluster | Chapter 14. Setting up a broker cluster A cluster consists of multiple broker instances that have been grouped together. Broker clusters enhance performance by distributing the message processing load across multiple brokers. In addition, broker clusters can minimize downtime through high availability. You can connect brokers together in many different cluster topologies. Within the cluster, each active broker manages its own messages and handles its own connections. You can also balance client connections across the cluster and redistribute messages to avoid broker starvation. 14.1. Understanding broker clusters Before creating a broker cluster, you should understand some important clustering concepts. 14.1.1. How broker clusters balance message load When brokers are connected to form a cluster, AMQ Broker automatically balances the message load between the brokers. This ensures that the cluster can maintain high message throughput. Consider a symmetric cluster of four brokers. Each broker is configured with a queue named OrderQueue . The OrderProducer client connects to Broker1 and sends messages to OrderQueue . Broker1 forwards the messages to the other brokers in round-robin fashion. The OrderConsumer clients connected to each broker consume the messages. Figure 14.1. Message load balancing in a cluster Without message load balancing, the messages sent to Broker1 would stay on Broker1 and only OrderConsumer1 would be able to consume them. The AMQ Broker automatically load balances messages by default, distributing the first group of messages to the first broker and the second group of messages to the second broker. The order in which the brokers were started determines which broker is first, second and so on. You can configure: the cluster to load balance messages to brokers that have a matching queue. the cluster to load balance messages to brokers that have a matching queue with active consumers. the cluster to not load balance, but to perform redistribution of messages from queues that do not have any consumers to queues that do have consumers. an address to automatically redistribute messages from queues that do not have any consumers to queues that do have consumers. Additional resources The message load balancing policy is configured with the message-load-balancing property in each broker's cluster connection. For more information, see Appendix C, Cluster Connection Configuration Elements . For more information about message redistribution, see Section 14.4.2, "Configuring message redistribution" . 14.1.2. How broker clusters improve reliability Broker clusters make high availability and failover possible, which makes them more reliable than standalone brokers. By configuring high availability, you can ensure that client applications can continue to send and receive messages even if a broker encounters a failure event. With high availability, the brokers in the cluster are grouped into live-backup groups. A live-backup group consists of a live broker that serves client requests, and one or more backup brokers that wait passively to replace the live broker if it fails. If a failure occurs, the backup brokers replaces the live broker in its live-backup group, and the clients reconnect and continue their work. 14.1.3. Cluster limitations The following limitation applies when you use AMQ broker in a clustered environment. Temporary Queues During a failover, if a client has consumers that use temporary queues, these queues are automatically recreated. The recreated queue name does not match the original queue name, which causes message redistribution to fail and can leave messages stranded in existing temporary queues. Red Hat recommends that you avoid using temporary queues in a cluster. For example, applications that use a request/reply pattern should use fixed queues for the JMSReplyTo address. 14.1.4. Understanding node IDs The broker node ID is a Globally Unique Identifier (GUID) generated programmatically when the journal for a broker instance is first created and initialized. The node ID is stored in the server.lock file. The node ID is used to uniquely identify a broker instance, regardless of whether the broker is a standalone instance, or part of a cluster. Live-backup broker pairs share the same node ID, since they share the same journal. In a broker cluster, broker instances (nodes) connect to each other and create bridges and internal "store-and-forward" queues. The names of these internal queues are based on the node IDs of the other broker instances. Broker instances also monitor cluster broadcasts for node IDs that match their own. A broker produces a warning message in the log if it identifies a duplicate ID. When you are using the replication high availability (HA) policy, a master broker that starts and has check-for-live-server set to true searches for a broker that is using its node ID. If the master broker finds another broker using the same node ID, it either does not start, or initiates failback, based on the HA configuration. The node ID is durable , meaning that it survives restarts of the broker. However, if you delete a broker instance (including its journal), then the node ID is also permanently deleted. Additional resources For more information about configuring the replication HA policy, see Configuring replication high availability . 14.1.5. Common broker cluster topologies You can connect brokers to form either a symmetric or chain cluster topology. The topology you implement depends on your environment and messaging requirements. Symmetric clusters In a symmetric cluster, every broker is connected to every other broker. This means that every broker is no more than one hop away from every other broker. Figure 14.2. Symmetric cluster topology Each broker in a symmetric cluster is aware of all of the queues that exist on every other broker in the cluster and the consumers that are listening on those queues. Therefore, symmetric clusters are able to load balance and redistribute messages more optimally than a chain cluster. Symmetric clusters are easier to set up than chain clusters, but they can be difficult to use in environments in which network restrictions prevent brokers from being directly connected. Chain clusters In a chain cluster, each broker in the cluster is not connected to every broker in the cluster directly. Instead, the brokers form a chain with a broker on each end of the chain and all other brokers just connecting to the and brokers in the chain. Figure 14.3. Chain cluster topology Chain clusters are more difficult to set up than symmetric clusters, but can be useful when brokers are on separate networks and cannot be directly connected. By using a chain cluster, an intermediary broker can indirectly connect two brokers to enable messages to flow between them even though the two brokers are not directly connected. 14.1.6. Broker discovery methods Discovery is the mechanism by which brokers in a cluster propagate their connection details to each other. AMQ Broker supports both dynamic discovery and static discovery . Dynamic discovery Each broker in the cluster broadcasts its connection settings to the other members through either UDP multicast or JGroups. In this method, each broker uses: A broadcast group to push information about its cluster connection to other potential members of the cluster. A discovery group to receive and store cluster connection information about the other brokers in the cluster. Static discovery If you are not able to use UDP or JGroups in your network, or if you want to manually specify each member of the cluster, you can use static discovery. In this method, a broker "joins" the cluster by connecting to a second broker and sending its connection details. The second broker then propagates those details to the other brokers in the cluster. 14.1.7. Cluster sizing considerations Before creating a broker cluster, consider your messaging throughput, topology, and high availability requirements. These factors affect the number of brokers to include in the cluster. Note After creating the cluster, you can adjust the size by adding and removing brokers. You can add and remove brokers without losing any messages. Messaging throughput The cluster should contain enough brokers to provide the messaging throughput that you require. The more brokers in the cluster, the greater the throughput. However, large clusters can be complex to manage. Topology You can create either symmetric clusters or chain clusters. The type of topology you choose affects the number of brokers you may need. For more information, see Section 14.1.5, "Common broker cluster topologies" . High availability If you require high availability (HA), consider choosing an HA policy before creating the cluster. The HA policy affects the size of the cluster, because each master broker should have at least one slave broker. For more information, see Section 14.3, "Implementing high availability" . 14.2. Creating a broker cluster You create a broker cluster by configuring a cluster connection on each broker that should participate in the cluster. The cluster connection defines how the broker should connect to the other brokers. You can create a broker cluster that uses static discovery or dynamic discovery (either UDP multicast or JGroups). Prerequisites You should have determined the size of the broker cluster. For more information, see Section 14.1.7, "Cluster sizing considerations" . 14.2.1. Creating a broker cluster with static discovery You can create a broker cluster by specifying a static list of brokers. Use this static discovery method if you are unable to use UDP multicast or JGroups on your network. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add the following connectors: A connector that defines how other brokers can connect to this one One or more connectors that define how this broker can connect to other brokers in the cluster <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> 1 <connector name="broker2">tcp://localhost:61618</connector> 2 <connector name="broker3">tcp://localhost:61619</connector> </connectors> ... </core> </configuration> 1 This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. 2 The broker2 and broker3 connectors define how this broker can connect to two other brokers in the cluster, one of which will always be available. If there are other brokers in the cluster, they will be discovered by one of these connectors when the initial connection is made. For more information about connectors, see Section 2.3, "About connectors" . Add a cluster connection and configure it to use static discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <static-connectors> <connector-ref>broker2-connector</connector-ref> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. static-connectors One or more connectors that this broker can use to make an initial connection to another broker in the cluster. After making this initial connection, the broker will discover the other brokers in the cluster. You only need to configure this property if the cluster uses static discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster that uses static discovery, see the clustered-static-discovery example . 14.2.2. Creating a broker cluster with UDP-based dynamic discovery You can create a broker cluster in which the brokers discover each other dynamically through UDP multicast. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a connector. This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> </connectors> ... </core> </configuration> Add a UDP broadcast group. The broadcast group enables the broker to push information about its cluster connection to the other brokers in the cluster. This broadcast group uses UDP to broadcast the connection settings: <configuration> <core> ... <broadcast-groups> <broadcast-group name="my-broadcast-group"> <local-bind-address>172.16.9.3</local-bind-address> <local-bind-port>-1</local-bind-port> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: broadcast-group Use the name attribute to specify a unique name for the broadcast group. local-bind-address The address to which the UDP socket is bound. If you have multiple network interfaces on your broker, you should specify which one you want to use for broadcasts. If this property is not specified, the socket will be bound to an IP address chosen by the operating system. This is a UDP-specific attribute. local-bind-port The port to which the datagram socket is bound. In most cases, use the default value of -1 , which specifies an anonymous port. This parameter is used in connection with local-bind-address . This is a UDP-specific attribute. group-address The multicast address to which the data will be broadcast. It is a class D IP address in the range 224.0.0.0 - 239.255.255.255 inclusive. The address 224.0.0.0 is reserved and is not available for use. This is a UDP-specific attribute. group-port The UDP port number used for broadcasting. This is a UDP-specific attribute. broadcast-period (optional) The interval in milliseconds between consecutive broadcasts. The default value is 2000 milliseconds. connector-ref The previously configured cluster connector that should be broadcasted. Add a UDP discovery group. The discovery group defines how this broker receives connector information from other brokers. The broker maintains a list of connectors (one entry for each broker). As it receives broadcasts from a broker, it updates its entry. If it does not receive a broadcast from a broker for a length of time, it removes the entry. This discovery group uses UDP to discover the brokers in the cluster: <configuration> <core> ... <discovery-groups> <discovery-group name="my-discovery-group"> <local-bind-address>172.16.9.7</local-bind-address> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: discovery-group Use the name attribute to specify a unique name for the discovery group. local-bind-address (optional) If the machine on which the broker is running uses multiple network interfaces, you can specify the network interface to which the discovery group should listen. This is a UDP-specific attribute. group-address The multicast address of the group on which to listen. It should match the group-address in the broadcast group that you want to listen from. This is a UDP-specific attribute. group-port The UDP port number of the multicast group. It should match the group-port in the broadcast group that you want to listen from. This is a UDP-specific attribute. refresh-timeout (optional) The amount of time in milliseconds that the discovery group waits after receiving the last broadcast from a particular broker before removing that broker's connector pair entry from its list. The default is 10000 milliseconds (10 seconds). Set this to a much higher value than the broadcast-period on the broadcast group. Otherwise, brokers might periodically disappear from the list even though they are still broadcasting (due to slight differences in timing). Create a cluster connection and configure it to use dynamic discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. discovery-group-ref The discovery group that this broker should use to locate other members of the cluster. You only need to configure this property if the cluster uses dynamic discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster configuration that uses dynamic discovery with UDP, see the clustered-queue example . 14.2.3. Creating a broker cluster with JGroups-based dynamic discovery If you are already using JGroups in your environment, you can use it to create a broker cluster in which the brokers discover each other dynamically. Prerequisites JGroups must be installed and configured. For an example of a JGroups configuration file, see the clustered-jgroups example . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a connector. This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> </connectors> ... </core> </configuration> Within the <core> element, add a JGroups broadcast group. The broadcast group enables the broker to push information about its cluster connection to the other brokers in the cluster. This broadcast group uses JGroups to broadcast the connection settings: <configuration> <core> ... <broadcast-groups> <broadcast-group name="my-broadcast-group"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: broadcast-group Use the name attribute to specify a unique name for the broadcast group. jgroups-file The name of JGroups configuration file to initialize JGroups channels. The file must be in the Java resource path so that the broker can load it. jgroups-channel The name of the JGroups channel to connect to for broadcasting. broadcast-period (optional) The interval, in milliseconds, between consecutive broadcasts. The default value is 2000 milliseconds. connector-ref The previously configured cluster connector that should be broadcasted. Add a JGroups discovery group. The discovery group defines how connector information is received. The broker maintains a list of connectors (one entry for each broker). As it receives broadcasts from a broker, it updates its entry. If it does not receive a broadcast from a broker for a length of time, it removes the entry. This discovery group uses JGroups to discover the brokers in the cluster: <configuration> <core> ... <discovery-groups> <discovery-group name="my-discovery-group"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: discovery-group Use the name attribute to specify a unique name for the discovery group. jgroups-file The name of JGroups configuration file to initialize JGroups channels. The file must be in the Java resource path so that the broker can load it. jgroups-channel The name of the JGroups channel to connect to for receiving broadcasts. refresh-timeout (optional) The amount of time in milliseconds that the discovery group waits after receiving the last broadcast from a particular broker before removing that broker's connector pair entry from its list. The default is 10000 milliseconds (10 seconds). Set this to a much higher value than the broadcast-period on the broadcast group. Otherwise, brokers might periodically disappear from the list even though they are still broadcasting (due to slight differences in timing). Create a cluster connection and configure it to use dynamic discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. discovery-group-ref The discovery group that this broker should use to locate other members of the cluster. You only need to configure this property if the cluster uses dynamic discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster that uses dynamic discovery with JGroups, see the clustered-jgroups example . 14.3. Implementing high availability You can improve its reliability by implementing high availability (HA), enabling the broker cluster continue to function even if one or more brokers go offline. Implementing HA involves several steps: Configure a broker cluster for your HA implementation as described in Section 14.2, "Creating a broker cluster" . You should understand what live-backup groups are, and choose an HA policy that best meets your requirements. See Understanding how HA works in AMQ Broker . When you have chosen a suitable HA policy, configure the HA policy on each broker in the cluster. See: Configuring shared store high availability Configuring replication high availability Configuring limited high availability with live-only Configuring high availability with colocated backups Configure your client applications to use failover . Note In the later event that you need to troubleshoot a broker cluster configured for high availability, it is recommended that you enable Garbage Collection (GC) logging for each Java Virtual Machine (JVM) instance that is running a broker in the cluster. To learn how to enable GC logs on your JVM, consult the official documentation for the Java Development Kit (JDK) version used by your JVM. For more information on the JVM versions that AMQ Broker supports, see Red Hat AMQ 7 Supported Configurations . 14.3.1. Understanding high availability In AMQ Broker, you implement high availability (HA) by grouping the brokers in the cluster into live-backup groups . In a live-backup group, a live broker is linked to a backup broker, which can take over for the live broker if it fails. AMQ Broker also provides several different strategies for failover (called HA policies ) within a live-backup group. 14.3.1.1. How live-backup groups provide high availability In AMQ Broker, you implement high availability (HA) by linking together the brokers in your cluster to form live-backup groups . Live-backup groups provide failover , which means that if one broker fails, another broker can take over its message processing. A live-backup group consists of one live broker (sometimes called the master broker) linked to one or more backup brokers (sometimes called slave brokers). The live broker serves client requests, while the backup brokers wait in passive mode. If the live broker fails, a backup broker replaces the live broker, enabling the clients to reconnect and continue their work. 14.3.1.2. High availability policies A high availability (HA) policy defines how failover happens in a live-backup group. AMQ Broker provides several different HA policies: Shared store (recommended) The live and backup brokers store their messaging data in a common directory on a shared file system; typically a Storage Area Network (SAN) or Network File System (NFS) server. You can also store broker data in a specified database if you have configured JDBC-based persistence. With shared store, if a live broker fails, the backup broker loads the message data from the shared store and takes over for the failed live broker. In most cases, you should use shared store instead of replication. Because shared store does not replicate data over the network, it typically provides better performance than replication. Shared store also avoids network isolation (also called "split brain") issues in which a live broker and its backup become live at the same time. Figure 14.4. Shared store high availability Replication The live and backup brokers continuously synchronize their messaging data over the network. If the live broker fails, the backup broker loads the synchronized data and takes over for the failed live broker. Data synchronization between the live and backup brokers ensures that no messaging data is lost if the live broker fails. When the live and backup brokers initially join together, the live broker replicates all of its existing data to the backup broker over the network. Once this initial phase is complete, the live broker replicates persistent data to the backup broker as the live broker receives it. This means that if the live broker drops off the network, the backup broker has all of the persistent data that the live broker has received up to that point. Because replication synchronizes data over the network, network failures can result in network isolation in which a live broker and its backup become live at the same time. Figure 14.5. Replication high availability Live-only (limited HA) When a live broker is stopped gracefully, it copies its messages and transaction state to another live broker and then shuts down. Clients can then reconnect to the other broker to continue sending and receiving messages. Figure 14.6. Live-only high availability Additional resources For more information about the persistent message data that is shared between brokers in a live-backup group, see Section 6.1, "Persisting message data in journals" . 14.3.1.3. Replication policy limitations When you use replication to provide high availability, a risk exists that both live and backup brokers can become live at the same time, which is referred to as "split brain". Split brain can happen if a live broker and its backup lose their connection. In this situation, both a live broker and its backup can become active at the same time. Because there is no message replication between the brokers in this situation, they each serve clients and process messages without the other knowing it. In this case, each broker has a completely different journal. Recovering from this situation can be very difficult and in some cases, not possible. To eliminate any possibility of split brain, use the shared store HA policy. If you do use the replication HA policy, take the following steps to reduce the risk of split brain occurring. If you want the brokers to use the ZooKeeper Coordination Service to coordinate brokers, deploy ZooKeeper on at least three nodes. If the brokers lose connection to one ZooKeeper node, using at least three nodes ensures that a majority of nodes are available to coordinate the brokers when a live-backup broker pair experiences a replication interruption. If you want to use the embedded broker coordination, which uses the other available brokers in the cluster to provide a quorum vote, you can reduce (but not eliminate) the chance of encountering split brain by using at least three live-backup pairs . Using at least three live-backup pairs ensures that a majority result can be achieved in any quorum vote that takes place when a live-backup broker pair experiences a replication interruption. Some additional considerations when you use the replication HA policy are described below: When a live broker fails and the backup transitions to live, no further replication takes place until a new backup broker is attached to the live, or failback to the original live broker occurs. If the backup broker in a live-backup group fails, the live broker continues to serve messages. However, messages are not replicated until another broker is added as a backup, or the original backup broker is restarted. During that time, messages are persisted only to the live broker. If the brokers use the embedded broker coordination and both brokers in a live-backup pair are shut down, to avoid message loss, you must restart the most recently active broker first. If the most recently active broker was the backup broker, you need to manually reconfigure this broker as a master broker to enable it to be restarted first. 14.3.2. Configuring shared store high availability You can use the shared store high availability (HA) policy to implement HA in a broker cluster. With shared store, both live and backup brokers access a common directory on a shared file system; typically a Storage Area Network (SAN) or Network File System (NFS) server. You can also store broker data in a specified database if you have configured JDBC-based persistence. With shared store, if a live broker fails, the backup broker loads the message data from the shared store and takes over for the failed live broker. In general, a SAN offers better performance (for example, speed) versus an NFS server, and is the recommended option, if available. If you need to use an NFS server, see Red Hat AMQ 7 Supported Configurations for more information about network file systems that AMQ Broker supports. In most cases, you should use shared store HA instead of replication. Because shared store does not replicate data over the network, it typically provides better performance than replication. Shared store also avoids network isolation (also called "split brain") issues in which a live broker and its backup become live at the same time. Note When using shared store, the startup time for the backup broker depends on the size of the message journal. When the backup broker takes over for a failed live broker, it loads the journal from the shared store. This process can be time consuming if the journal contains a lot of data. 14.3.2.1. Configuring an NFS shared store When using shared store high availability, you must configure both the live and backup brokers to use a common directory on a shared file system. Typically, you use a Storage Area Network (SAN) or Network File System (NFS) server. Listed below are some recommended configuration options when mounting an exported directory from an NFS server on each of your broker machine instances. sync Specifies that all changes are immediately flushed to disk. intr Allows NFS requests to be interrupted if the server is shut down or cannot be reached. noac Disables attribute caching. This behavior is needed to achieve attribute cache coherence among multiple clients. soft Specifies that if the NFS server is unavailable, the error should be reported rather than waiting for the server to come back online. lookupcache=none Disables lookup caching. timeo=n The time, in deciseconds (tenths of a second), that the NFS client (that is, the broker) waits for a response from the NFS server before it retries a request. For NFS over TCP, the default timeo value is 600 (60 seconds). For NFS over UDP, the client uses an adaptive algorithm to estimate an appropriate timeout value for frequently used request types, such as read and write requests. retrans=n The number of times that the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. Important It is important to use reasonable values when you configure the timeo and retrans options. A default timeo wait time of 600 deciseconds (60 seconds) combined with a retrans value of 5 retries can result in a five-minute wait for AMQ Broker to detect an NFS disconnection. Additional resources To learn how to mount an exported directory from an NFS server, see Mounting an NFS share with mount in the Red Hat Enterprise Linux documentation. For information about network file systems supported by AMQ Broker, see Red Hat AMQ 7 Supported Configurations . 14.3.2.2. Configuring shared store high availability This procedure shows how to configure shared store high availability for a broker cluster. Prerequisites A shared storage system must be accessible to the live and backup brokers. Typically, you use a Storage Area Network (SAN) or Network File System (NFS) server to provide the shared store. For more information about supported network file systems, see Red Hat AMQ 7 Supported Configurations . If you have configured JDBC-based persistence, you can use your specified database to provide the shared store. To learn how to configure JDBC persistence, see Section 6.2, "Persisting message data in a database" . Procedure Group the brokers in your cluster into live-backup groups. In most cases, a live-backup group should consist of two brokers: a live broker and a backup broker. If you have six brokers in your cluster, you would need three live-backup groups. Create the first live-backup group consisting of one live broker and one backup broker. Open the live broker's <broker_instance_dir> /etc/broker.xml configuration file. If you are using: A network file system to provide the shared store, verify that the live broker's paging, bindings, journal, and large messages directories point to a shared location that the backup broker can also access. <configuration> <core> ... <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> ... </core> </configuration> A database to provide the shared store, ensure that both the master and backup broker can connect to the same database and have the same configuration specified in the database-store element of the broker.xml configuration file. An example configuration is shown below. <configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BIND_TABLE</bindings-table-name> <message-table-name>MSG_TABLE</message-table-name> <large-message-table-name>LGE_TABLE</large-message-table-name> <page-store-table-name>PAGE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_TABLE<node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>15000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration> Configure the live broker to use shared store for its HA policy. <configuration> <core> ... <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> ... </core> </configuration> failover-on-shutdown If this broker is stopped normally, this property controls whether the backup broker should become live and take over. Open the backup broker's <broker_instance_dir> /etc/broker.xml configuration file. If you are using: A network file system to provide the shared store, verify that the backup broker's paging, bindings, journal, and large messages directories point to the same shared location as the live broker. <configuration> <core> ... <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> ... </core> </configuration> A database to provide the shared store, ensure that both the master and backup brokers can connect to the same database and have the same configuration specified in the database-store element of the broker.xml configuration file. Configure the backup broker to use shared store for its HA policy. <configuration> <core> ... <ha-policy> <shared-store> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </shared-store> </ha-policy> ... </core> </configuration> failover-on-shutdown If this broker has become live and then is stopped normally, this property controls whether the backup broker (the original live broker) should become live and take over. allow-failback If failover has occurred and the backup broker has taken over for the live broker, this property controls whether the backup broker should fail back to the original live broker when it restarts and reconnects to the cluster. Note Failback is intended for a live-backup pair (one live broker paired with a single backup broker). If the live broker is configured with multiple backups, then failback will not occur. Instead, if a failover event occurs, the backup broker will become live, and the backup will become its backup. When the original live broker comes back online, it will not be able to initiate failback, because the broker that is now live already has a backup. restart-backup This property controls whether the backup broker automatically restarts after it fails back to the live broker. The default value of this property is true . Repeat Step 2 for each remaining live-backup group in the cluster. 14.3.3. Configuring replication high availability You can use the replication high availability (HA) policy to implement HA in a broker cluster. With replication, persistent data is synchronized between the live and backup brokers. If a live broker encounters a failure, message data is synchronized to the backup broker and it takes over for the failed live broker. You should use replication as an alternative to shared store, if you do not have a shared file system. However, replication can result in a scenario in which a live broker and its backup become live at the same time. Note Because the live and backup brokers must synchronize their messaging data over the network, replication adds a performance overhead. This synchronization process blocks journal operations, but it does not block clients. You can configure the maximum amount of time that journal operations can be blocked for data synchronization. If the replication connection between the live-backup broker pair is interrupted, the brokers require a way to coordinate to determine if the live broker is still active or if it is unavailable and a failover to the backup broker is required. To provide this coordination, you can configure the brokers to use either of the following coordination methods. The Apache ZooKeeper coordination service. The embedded broker coordination, which uses other brokers in the cluster to provide a quorum vote. 14.3.3.1. Choosing a coordination method Red Hat recommends that you use the Apache ZooKeeper coordination service to coordinate broker activation. When choosing a coordination method, it is useful to understand the differences in infrastructure requirements and the management of data consistency between both coordination methods. Infrastructure requirements If you use the ZooKeeper coordination service, you can operate with a single live-backup broker pair. However, you must connect the brokers to at least 3 Apache ZooKeeper nodes to ensure that brokers can continue to function if they lose connection to one node. To provide a coordination service to brokers, you can share existing ZooKeeper nodes that are used by other applications. For more information on setting up Apache ZooKeeper, see the Apache ZooKeeper documentation. If you want to use the embedded broker coordination, which uses the other available brokers in the cluster to provide a quorum vote, you must have at least three live-backup broker pairs. Using at least three live-backup pairs ensures that a majority result can be achieved in any quorum vote that occurs when a live-backup broker pair experiences a replication interruption. Data consistency If you use the Apache ZooKeeper coordination service, ZooKeeper tracks the version of the data on each broker so only the broker that has the most up-to-date journal data can activate as the live broker, irrespective of whether the broker is configured as a primary or backup broker for replication purposes. Version tracking eliminates the possibility that a broker can activate with an out-of-date journal and start serving clients. If you use the embedded broker coordination, no mechanism exists to track the version of the data on each broker to ensure that only the broker that has the most up-to-date journal can become the live broker. Therefore, it is possible for a broker that has an out-of-date journal to become live and start serving clients, which causes a divergence in the journal. 14.3.3.2. How brokers coordinate after a replication interruption This section explains how both coordination methods work after a replication connection is interrupted. Using the ZooKeeper coordination service If you use the ZooKeeper coordination service to manage replication interruptions, both brokers must be connected to multiple Apache ZooKeeper nodes. If, at any time, the live broker loses connection to a majority of the ZooKeeper nodes, it shuts down to avoid the risk of "split brain" occurring. If, at any time, the backup broker loses connection to a majority of the ZooKeeper nodes, it stops receiving replication data and waits until it can connect to a majority of the ZooKeeper nodes before it acts as a backup broker again. When the connection is restored to a majority of the ZooKeeper nodes, the backup broker uses ZooKeeper to determine if it needs to discard its data and search for a live broker from which to replicate, or if it can become the live broker with its current data. ZooKeeper uses the following control mechanisms to manage the failover process: A shared lease lock that can be owned only by a single live broker at any time. An activation sequence counter that tracks the latest version of the broker data. Each broker tracks the version of its journal data in a local counter stored in its server lock file, along with its NodeID. The live broker also shares its version in a coordinated activation sequence counter on ZooKeeper. If the replication connection between the live broker and the backup broker is lost, the live broker increases both its local activation sequence counter value and the coordinated activation sequence counter value on ZooKeeper by 1 to advertise that it has the most up-to-date data. The backup broker's data is now considered stale and the broker cannot become the live broker until the replication connection is restored and the up-to-date data is synchronized. After the replication connection is lost, the backup broker checks if the ZooKeeper lock is owned by the live broker and if the coordinated activation sequence counter on ZooKeeper matches its local counter value. If the lock is owned by the live broker, the backup broker detects that the activation sequence counter on ZooKeeper was updated by the live broker when the replication connection was lost. This indicates that the live broker is running so the backup broker does not try to failover. If the lock is not owned by the live broker, the live broker is not alive. If the value of the activation sequence counter on the backup broker is the same as the coordinated activation sequence counter value on ZooKeeper, which indicates that the backup broker has up-to-date data, the backup broker fails over. If the lock is not owned by the live broker but the value of the activation sequence counter on the backup broker is less than the counter value on ZooKeeper, the data on the backup broker is not up-to-date and the backup broker cannot fail over. Using the embedded broker coordination If a live-backup broker pair use the embedded broker coordination to coordinate a replication interruption, the following two types of quorum votes can be initiated. Table 14.1. Quorum voting Vote type Description Initiator Required configuration Participants Action based on vote result Backup vote If a backup broker loses its replication connection to the live broker, the backup broker decides whether or not to start based on the result of this vote. Backup broker None. A backup vote happens automatically when a backup broker loses connection to its replication partner. However, you can control the properties of a backup vote by specifying custom values for these parameters: quorum-vote-wait vote-retries vote-retry-wait Other live brokers in the cluster The backup broker starts if it receives a majority (that is, a quorum ) vote from the other live brokers in the cluster, indicating that its replication partner is no longer available. Live vote If a live broker loses connection to its replication partner, the live broker decides whether to continue running based on this vote. Live broker A live vote happens when a live broker loses connection to its replication partner and vote-on-replication-failure is set to true . A backup broker that has become active is considered a live broker, and can initiate a live vote. Other live brokers in the cluster The live broker shuts down if it doesn't receive a majority vote from the other live brokers in the cluster, indicating that its cluster connection is still active. Important Listed below are some important things to note about how the configuration of your broker cluster affects the behavior of quorum voting. For a quorum vote to succeed, the size of your cluster must allow a majority result to be achieved. Therefore, your cluster should have at least three live-backup broker pairs. The more live-backup broker pairs that you add to your cluster, the more you increase the overall fault tolerance of the cluster. For example, suppose you have three live-backup pairs. If you lose a complete live-backup pair, the two remaining live-backup pairs cannot achieve a majority result in any subsequent quorum vote. This situation means that any further replication interruption in the cluster might cause a live broker to shut down, and prevent its backup broker from starting up. By configuring your cluster with, say, five broker pairs, the cluster can experience at least two failures, while still ensuring a majority result from any quorum vote. If you intentionally reduce the number of live-backup broker pairs in your cluster, the previously established threshold for a majority vote does not automatically decrease. During this time, any quorum vote triggered by a lost replication connection cannot succeed, making your cluster more vulnerable to split brain. To make your cluster recalculate the majority threshold for a quorum vote, first shut down the live-backup pairs that you are removing from your cluster. Then, restart the remaining live-backup pairs in the cluster. When all of the remaining brokers have been restarted, the cluster recalculates the quorum vote threshold. 14.3.3.3. Configuring replication for a broker cluster using the ZooKeeper coordination service You must specify the same replication configuration for both brokers in a live-backup pair that uses the Apache ZooKeeper coordination service. The brokers then coordinate to determine which broker is the primary broker and which is the backup broker. Prerequisites At least 3 Apache ZooKeeper nodes to ensure that brokers can continue to operate if they lose the connection to one node. The broker machines have a similar hardware specification, that is, you do not have a preference for which machine runs the live broker and which runs the backup broker at any point in time. ZooKeeper must have sufficient resources to ensure that pause times are significantly less than the ZooKeeper server tick time. Depending on the expected load of the broker, consider carefully if the broker and ZooKeeper node can share the same node. For more information, see https://zookeeper.apache.org/ . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file for both brokers in the live-backup pair. Configure the same replication configuration for both brokers in the pair. For example: <configuration> <core> ... <ha-policy> <replication> <primary> <coordination-id>production-001</coordination-id> <manager> <properties> <property key="connect-string" value="192.168.1.10:6666,192.168.2.10:6667,192.168.3.10:6668"/> </properties> </manager> </primary> </replication> </ha-policy> ... </core> </configuration> primary Configure the replication type as primary to indicate that either broker can be the primary broker depending on the result of the broker coordination. Coordination-id Specify a common string value for both brokers in the live-backup pair. Brokers with the same Coordination-id string coordinate activation together. During the coordination process, both brokers use the Coordination-id string as the node Id and attempt to obtain a lock in ZooKeeper. The first broker that obtains a lock and has up-to-date data starts as a live broker and the other broker becomes the backup. properties Specify a property element within which you can specify a set of key-value pairs to provide the connection details for the ZooKeeper nodes: Table 14.2. ZooKeeper connection details Key Value connect-string Specify a comma-separated list of the IP addresses and port numbers of the ZooKeeper nodes. For example, value="192.168.1.10:6666,192.168.2.10:6667,192.168.3.10:6668" . session-ms The duration that the broker waits before it shuts down after losing connection to a majority of the ZooKeeper nodes. The default value is 18000 ms. A valid value is between 2 times and 20 times the ZooKeeper server tick time. Note The ZooKeeper pause time for garbage collection must be less than 0.33 of the value of the session-ms property in order to allow the ZooKeeper heartbeat to function reliably. If it is not possible to ensure that pause times are less than this limit, increase the value of the session-ms property for each broker and accept a slower failover. Important Broker replication partners automatically exchange "ping" packets every 2 seconds to confirm that the partner broker is available. When a backup broker does not receive a response from the live broker, the backup waits for a response until the broker's connection time-to-live (ttl) expires. The default connection-ttl is 60000 ms which means that a backup broker attempts to fail over after 60 seconds. It is recommended that you set the connection-ttl value to a similar value to the session-ms property value to allow a faster failover. To set a new connection-ttl, configure the connection-ttl-override property. namespace (optional) If the brokers share the ZooKeeper nodes with other applications, you can create a ZooKeeper namespace to store the files that provide a coordination service to brokers. You must specify the same namespace for both brokers in a live-backup pair. Configure any additional HA properties for the brokers. These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Additional Replication High Availability Configuration Elements . Repeat steps 1 to 3 to configure each additional live-backup broker pair in the cluster. Additional resources For examples of broker clusters that use replication for HA, see the HA examples . For more information about node IDs, see Understanding node IDs . 14.3.3.4. Configuring a broker cluster for replication high availability using the embedded broker coordination Replication using the embedded broker coordination requires at least three live-backup pairs to lessen (but not eliminate) the risk of "split brain". The following procedure describes how to configure replication high-availability (HA) for a six-broker cluster. In this topology, the six brokers are grouped into three live-backup pairs: each of the three live brokers is paired with a dedicated backup broker. Prerequisites You must have a broker cluster with at least six brokers. The six brokers are configured into three live-backup pairs. For more information about adding brokers to a cluster, see Chapter 14, Setting up a broker cluster . Procedure Group the brokers in your cluster into live-backup groups. In most cases, a live-backup group should consist of two brokers: a live broker and a backup broker. If you have six brokers in your cluster, you need three live-backup groups. Create the first live-backup group consisting of one live broker and one backup broker. Open the live broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the live broker to use replication for its HA policy. <configuration> <core> ... <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> ... </master> </replication> </ha-policy> ... </core> </configuration> check-for-live-server If the live broker fails, this property controls whether clients should fail back to it when it restarts. If you set this property to true , when the live broker restarts after a failover, it searches for another broker in the cluster with the same node ID. If the live broker finds another broker with the same node ID, this indicates that a backup broker successfully started upon failure of the live broker. In this case, the live broker synchronizes its data with the backup broker. The live broker then requests the backup broker to shut down. If the backup broker is configured for failback, as shown below, it shuts down. The live broker then resumes its active role, and clients reconnect to it. Warning If you do not set check-for-live-server to true on the live broker, you might experience duplicate messaging handling when you restart the live broker after a failover. Specifically, if you restart a live broker with this property set to false , the live broker does not synchronize data with its backup broker. In this case, the live broker might process the same messages that the backup broker has already handled, causing duplicates. group-name A name for this live-backup group (optional). To form a live-backup group, the live and backup brokers must be configured with the same group name. If you don't specify a group-name, a backup broker can replicate with any live broker. vote-on-replication-failure This property controls whether a live broker initiates a quorum vote called a live vote in the event of an interrupted replication connection. A live vote is a way for a live broker to determine whether it or its partner is the cause of the interrupted replication connection. Based on the result of the vote, the live broker either stays running or shuts down. Important For a quorum vote to succeed, the size of your cluster must allow a majority result to be achieved. Therefore, when you use the replication HA policy, your cluster should have at least three live-backup broker pairs. The more broker pairs you configure in your cluster, the more you increase the overall fault tolerance of the cluster. For example, suppose you have three live-backup broker pairs. If you lose connection to a complete live-backup pair, the two remaining live-backup pairs can no longer achieve a majority result in a quorum vote. This situation means that any subsequent replication interruption might cause a live broker to shut down, and prevent its backup broker from starting up. By configuring your cluster with, say, five broker pairs, the cluster can experience at least two failures, while still ensuring a majority result from any quorum vote. Configure any additional HA properties for the live broker. These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Additional Replication High Availability Configuration Elements . Open the backup broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the backup broker to use replication for its HA policy. <configuration> <core> ... <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> ... </slave> </replication> </ha-policy> ... </core> </configuration> allow-failback If failover has occurred and the backup broker has taken over for the live broker, this property controls whether the backup broker should fail back to the original live broker when it restarts and reconnects to the cluster. Note Failback is intended for a live-backup pair (one live broker paired with a single backup broker). If the live broker is configured with multiple backups, then failback will not occur. Instead, if a failover event occurs, the backup broker will become live, and the backup will become its backup. When the original live broker comes back online, it will not be able to initiate failback, because the broker that is now live already has a backup. group-name A name for this live-backup group (optional). To form a live-backup group, the live and backup brokers must be configured with the same group name. If you don't specify a group-name, a backup broker can replicate with any live broker. vote-on-replication-failure This property controls whether a live broker initiates a quorum vote called a live vote in the event of an interrupted replication connection. A backup broker that has become active is considered a live broker and can initiate a live vote. A live vote is a way for a live broker to determine whether it or its partner is the cause of the interrupted replication connection. Based on the result of the vote, the live broker either stays running or shuts down. (Optional) Configure properties of the quorum votes that the backup broker initiates. <configuration> <core> ... <ha-policy> <replication> <slave> ... <vote-retries>12</vote-retries> <vote-retry-wait>5000</vote-retry-wait> ... </slave> </replication> </ha-policy> ... </core> </configuration> vote-retries This property controls how many times the backup broker retries the quorum vote in order to receive a majority result that allows the backup broker to start up. vote-retry-wait This property controls how long, in milliseconds, that the backup broker waits between each retry of the quorum vote. Configure any additional HA properties for the backup broker. These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Additional Replication High Availability Configuration Elements . Repeat step 2 for each additional live-backup group in the cluster. If there are six brokers in the cluster, repeat this procedure two more times; once for each remaining live-backup group. Additional resources For examples of broker clusters that use replication for HA, see the HA examples . For more information about node IDs, see Understanding node IDs . 14.3.4. Configuring limited high availability with live-only The live-only HA policy enables you to shut down a broker in a cluster without losing any messages. With live-only, when a live broker is stopped gracefully, it copies its messages and transaction state to another live broker and then shuts down. Clients can then reconnect to the other broker to continue sending and receiving messages. The live-only HA policy only handles cases when the broker is stopped gracefully. It does not handle unexpected broker failures. While live-only HA prevents message loss, it may not preserve message order. If a broker configured with live-only HA is stopped, its messages will be appended to the ends of the queues of another broker. Note When a broker is preparing to scale down, it sends a message to its clients before they are disconnected informing them which new broker is ready to process their messages. However, clients should reconnect to the new broker only after their initial broker has finished scaling down. This ensures that any state, such as queues or transactions, is available on the other broker when the client reconnects. The normal reconnect settings apply when the client is reconnecting, so you should set these high enough to deal with the time needed to scale down. This procedure describes how to configure each broker in the cluster to scale down. After completing this procedure, whenever a broker is stopped gracefully, it will copy its messages and transaction state to another broker in the cluster. Procedure Open the first broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the broker to use the live-only HA policy. <configuration> <core> ... <ha-policy> <live-only> </live-only> </ha-policy> ... </core> </configuration> Configure a method for scaling down the broker cluster. Specify the broker or group of brokers to which this broker should scale down. Table 14.3. Methods for scaling down a broker cluster To scale down to... Do this... A specific broker in the cluster Specify the connector of the broker to which you want to scale down. <live-only> <scale-down> <connectors> <connector-ref>broker1-connector</connector-ref> </connectors> </scale-down> </live-only> Any broker in the cluster Specify the broker cluster's discovery group. <live-only> <scale-down> <discovery-group-ref discovery-group-name="my-discovery-group"/> </scale-down> </live-only> A broker in a particular broker group Specify a broker group. <live-only> <scale-down> <group-name>my-group-name</group-name> </scale-down> </live-only> Repeat this procedure for each remaining broker in the cluster. Additional resources For an example of a broker cluster that uses live-only to scale down the cluster, see the scale-down example . 14.3.5. Configuring high availability with colocated backups Rather than configure live-backup groups, you can colocate backup brokers in the same JVM as another live broker. In this configuration, each live broker is configured to request another live broker to create and start a backup broker in its JVM. Figure 14.7. Colocated live and backup brokers You can use colocation with either shared store or replication as the high availability (HA) policy. The new backup broker inherits its configuration from the live broker that creates it. The name of the backup is set to colocated_backup_n where n is the number of backups the live broker has created. In addition, the backup broker inherits the configuration for its connectors and acceptors from the live broker that creates it. By default, port offset of 100 is applied to each. For example, if the live broker has an acceptor for port 61616, the first backup broker created will use port 61716, the second backup will use 61816, and so on. Directories for the journal, large messages, and paging are set according to the HA policy you choose. If you choose shared store, the requesting broker notifies the target broker which directories to use. If replication is chosen, directories are inherited from the creating broker and have the new backup's name appended to them. This procedure configures each broker in the cluster to use shared store HA, and to request a backup to be created and colocated with another broker in the cluster. Procedure Open the first broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the broker to use an HA policy and colocation. In this example, the broker is configured with shared store HA and colocation. <configuration> <core> ... <ha-policy> <shared-store> <colocated> <request-backup>true</request-backup> <max-backups>1</max-backups> <backup-request-retries>-1</backup-request-retries> <backup-request-retry-interval>5000</backup-request-retry-interval/> <backup-port-offset>150</backup-port-offset> <excludes> <connector-ref>remote-connector</connector-ref> </excludes> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </colocated> </shared-store> </ha-policy> ... </core> </configuration> request-backup By setting this property to true , this broker will request a backup broker to be created by another live broker in the cluster. max-backups The number of backup brokers that this broker can create. If you set this property to 0 , this broker will not accept backup requests from other brokers in the cluster. backup-request-retries The number of times this broker should try to request a backup broker to be created. The default is -1 , which means unlimited tries. backup-request-retry-interval The amount of time in milliseconds that the broker should wait before retrying a request to create a backup broker. The default is 5000 , or 5 seconds. backup-port-offset The port offset to use for the acceptors and connectors for a new backup broker. If this broker receives a request to create a backup for another broker in the cluster, it will create the backup broker with the ports offset by this amount. The default is 100 . excludes (optional) Excludes connectors from the backup port offset. If you have configured any connectors for external brokers that should be excluded from the backup port offset, add a <connector-ref> for each of the connectors. master The shared store or replication failover configuration for this broker. slave The shared store or replication failover configuration for this broker's backup. Repeat this procedure for each remaining broker in the cluster. Additional resources For examples of broker clusters that use colocated backups, see the HA examples . 14.3.6. Configuring clients to fail over After configuring high availability in a broker cluster, you configure your clients to fail over. Client failover ensures that if a broker fails, the clients connected to it can reconnect to another broker in the cluster with minimal downtime. Note In the event of transient network problems, AMQ Broker automatically reattaches connections to the same broker. This is similar to failover, except that the client reconnects to the same broker. You can configure two different types of client failover: Automatic client failover The client receives information about the broker cluster when it first connects. If the broker to which it is connected fails, the client automatically reconnects to the broker's backup, and the backup broker re-creates any sessions and consumers that existed on each connection before failover. Application-level client failover As an alternative to automatic client failover, you can instead code your client applications with your own custom reconnection logic in a failure handler. Procedure Use AMQ Core Protocol JMS to configure your client application with automatic or application-level failover. For more information, see Using the AMQ Core Protocol JMS Client . 14.4. Enabling message redistribution If your broker cluster is configured with message-load-balancing set to ON_DEMAND or OFF_WITH_REDISTRIBUTION , you can configure message redistribution to prevent messages from being "stuck" in a queue that does not have a consumer to consume the messages. This section contains information about: Understanding message distribution Configuring message redistribution 14.4.1. Understanding message redistribution Broker clusters use load balancing to distribute the message load across the cluster. When configuring load balancing in the cluster connection, you can enable redistribution using the following message-load-balancing settings: ON_DEMAND - enable load balancing and allow redistribution OFF_WITH_REDISTRIBUTION - disable load balancing but allow redistribution In both cases, the broker forwards messages only to other brokers that have matching consumers. This behavior ensures that messages are not moved to queues that do not have any consumers to consume the messages. However, if the consumers attached to a queue close after the messages are forwarded to the broker, those messages become "stuck" in the queue and are not consumed. This issue is sometimes called starvation . Message redistribution prevents starvation by automatically redistributing the messages from queues that have no consumers to brokers in the cluster that do have matching consumers. With OFF_WITH_REDISTRIBUTION , the broker only forwards messages to other brokers that have matching consumers if there are no active local consumers, enabling you to prioritize a broker while providing an alternative when consumers are not available. Message redistribution supports the use of filters (also know as selectors ), that is, messages are redistributed when they do not match the selectors of the available local consumers. Additional resources For more information about cluster load balancing, see Section 14.1.1, "How broker clusters balance message load" . 14.4.2. Configuring message redistribution This procedure shows how to configure message redistribution with load balancing. If you want message redistribution without load balancing, set <message-load-balancing> is set to OFF_WITH_REDISTRIBUTION . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In the <cluster-connection> element, verify that <message-load-balancing> is set to ON_DEMAND . <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> ... <message-load-balancing>ON_DEMAND</message-load-balancing> ... </cluster-connection> </cluster-connections> </core> </configuration> Within the <address-settings> element, set the redistribution delay for a queue or set of queues. In this example, messages load balanced to my.queue will be redistributed 5000 milliseconds after the last consumer closes. <configuration> <core> ... <address-settings> <address-setting match="my.queue"> <redistribution-delay>5000</redistribution-delay> </address-setting> </address-settings> ... </core> </configuration> address-setting Set the match attribute to be the name of the queue for which you want messages to be redistributed. You can use the broker wildcard syntax to specify a range of queues. For more information, see Section 4.2, "Applying address settings to sets of addresses" . redistribution-delay The amount of time (in milliseconds) that the broker should wait after this queue's final consumer closes before redistributing messages to other brokers in the cluster. If you set this to 0 , messages will be redistributed immediately. However, you should typically set a delay before redistributing - it is common for a consumer to close but another one to be quickly created on the same queue. Repeat this procedure for each additional broker in the cluster. Additional resources For an example of a broker cluster configuration that redistributes messages, see the queue-message-redistribution example . 14.5. Configuring clustered message grouping Message grouping enables clients to send groups of messages of a particular type to be processed serially by the same consumer. By adding a grouping handler to each broker in the cluster, you ensure that clients can send grouped messages to any broker in the cluster and still have those messages consumed in the correct order by the same consumer. Note Grouping and clustering techniques can be summarized as follows: Message grouping imposes an order on message consumption. In a group, each message must be fully consumed and acknowledged prior to proceeding with the message. This methodology leads to serial message processing, where concurrency is not an option. Clustering aims to horizontally scale brokers to boost message throughput. Horizontal scaling is achieved by adding additional consumers that can process messages concurrently. Because these techniques contradict each other, avoid using clustering and grouping together. There are two types of grouping handlers: local handlers and remote handlers . They enable the broker cluster to route all of the messages in a particular group to the appropriate queue so that the intended consumer can consume them in the correct order. Prerequisites There should be at least one consumer on each broker in the cluster. When a message is pinned to a consumer on a queue, all messages with the same group ID will be routed to that queue. If the consumer is removed, the queue will continue to receive the messages even if there are no consumers. Procedure Configure a local handler on one broker in the cluster. If you are using high availability, this should be a master broker. Open the broker's <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a local handler: The local handler serves as an arbiter for the remote handlers. It stores route information and communicates it to the other brokers. <configuration> <core> ... <grouping-handler name="my-grouping-handler"> <type>LOCAL</type> <timeout>10000</timeout> </grouping-handler> ... </core> </configuration> grouping-handler Use the name attribute to specify a unique name for the grouping handler. type Set this to LOCAL . timeout The amount of time to wait (in milliseconds) for a decision to be made about where to route the message. The default is 5000 milliseconds. If the timeout is reached before a routing decision is made, an exception is thrown, which ensures strict message ordering. When the broker receives a message with a group ID, it proposes a route to a queue to which the consumer is attached. If the route is accepted by the grouping handlers on the other brokers in the cluster, then the route is established: all brokers in the cluster will forward messages with this group ID to that queue. If the broker's route proposal is rejected, then it proposes an alternate route, repeating the process until a route is accepted. If you are using high availability, copy the local handler configuration to the master broker's slave broker. Copying the local handler configuration to the slave broker prevents a single point of failure for the local handler. On each remaining broker in the cluster, configure a remote handler. Open the broker's <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a remote handler: <configuration> <core> ... <grouping-handler name="my-grouping-handler"> <type>REMOTE</type> <timeout>5000</timeout> </grouping-handler> ... </core> </configuration> grouping-handler Use the name attribute to specify a unique name for the grouping handler. type Set this to REMOTE . timeout The amount of time to wait (in milliseconds) for a decision to be made about where to route the message. The default is 5000 milliseconds. Set this value to at least half of the value of the local handler. Additional resources For an example of a broker cluster configured for message grouping, see the JMS clustered grouping example . 14.6. Connecting clients to a broker cluster You can use the Red Hat build of Apache Qpid JMS clients to connect to the cluster. By using JMS, you can configure your messaging clients to discover the list of brokers dynamically or statically. You can also configure client-side load balancing to distribute the client sessions created from the connection across the cluster. Procedure Use AMQ Core Protocol JMS to configure your client application to connect to the broker cluster. For more information, see Using the AMQ Core Protocol JMS Client . 14.7. Partitioning client connections Partitioning client connections involves routing connections for individual clients to the same broker each time the client initiates a connection. Two use cases for partitioning client connections are: Partitioning clients of durable subscriptions to ensure that a subscriber always connects to the broker where the durable subscriber queue is located. Minimizing the need to move data by attracting clients to data where it originates, also known as data gravity. Durable subscriptions A durable subscription is represented as a queue on a broker and is created when a durable subscriber first connects to the broker. This queue remains on the broker and receives messages until the client unsubscribes. Therefore, you want the client to connect to the same broker repeatedly to consume the messages that are in the subscriber queue. To partition clients for durable subscription queues, you can filter the client ID in client connections. Data gravity If you scale up the number of brokers in your environment without considering data gravity, some of the performance benefits are lost because of the need to move messages between brokers. To support date gravity, you should partition your client connections so that client consumers connect to the broker on which the messages that they need to consume are produced. To partition client connections to support data gravity, you can filter any of the following attributes of a client connection: a role assigned to the connecting user (ROLE_NAME) the username of the user (USER_NAME) the hostname of the client (SNI_HOST) the IP address of the client (SOURCE_IP) 14.7.1. Partitioning client connections to support durable subscriptions To partition clients for durable subscriptions, you can filter client IDs in incoming connections by using a consistent hashing algorithm or a regular expression. Prerequisites Clients are configured so they can connect to all of the brokers in the cluster, for example, by using a load balancer or by having all of the broker instances configured in the connection URL. If a broker rejects a connection because the client details do not match the partition configuration for that broker, the client must be able to connect to the other brokers in the cluster to find a broker that accepts connections from it. 14.7.1.1. Filtering client IDs using a consistent hashing algorithm You can configure each broker in a cluster to use a consistent hashing algorithm to hash the client ID in each client connection. After the broker hashes the client ID, it performs a modulo operation on the hashed value to return an integer value, which identifies the target broker for the client connection. The broker compares the integer value returned to a unique value configured on the broker. If there is a match, the broker accepts the connection. If the values don't match, the broker rejects the connection. This process is repeated on each broker in the cluster until a match is found and a broker accepts the connection. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file for the first broker. Create a connection-routers element and create a connection-route to filter client IDs by using a consistent hashing algorithm. For example: <configuration> <core> ... <connection-routers> <connection-route name="consistent-hash-routing"> <key>CLIENT_ID</target-key> <local-target-filter>NULL|0</local-target-filter> <policy name="CONSISTENT_HASH_MODULO"> <property key="modulo" value="<number_of_brokers_in_cluster>"> </property> </policy> </connection-route> </connection-routers> ... </core> </configuration> connection-route For the connection-route name , specify an identifying string for this connection routing configuration. You must add this name to each broker acceptor that you want to apply the consistent hashing filter to. key The key type to apply the filter to. To filter the client ID, specify CLIENT_ID in the key field. local-target-filter The value that the broker compares to the integer value returned by the modulo operation to determine if there is a match and the broker can accept the connection. The value of NULL|0 in the example provides a match for connections that have no client ID (NULL) and connections where the number returned by the modulo operation is 0 . policy Accepts a modulo property key, which performs a modulo operation on the hashed client ID to identify the target broker. The value of the modulo property key must equal the number of brokers in the cluster. Important The policy name must be CONSISTENT_HASH_MODULO . Open the <broker_instance_dir> /etc/broker.xml configuration file for the second broker. Create a connection-routers element and create a connection route to filter client IDs by using a consistent hashing algorithm. In the following example, the local-target-filter value of NULL|1 provides a match for connections that have no client ID (NULL) and connections where the value returned by the modulo operation is 1 . <configuration> <core> ... <connection-routers> <connection-route name="consistent-hash-routing"> <key>CLIENT_ID</target-key> <local-target-filter>NULL|1</local-target-filter> <policy name="CONSISTENT_HASH_MODULO"> <property key="modulo" value="<number_of_brokers_in_cluster>"> </property> </policy> </connection-route> </connection-routers> ... </core> </configuration> Repeat this procedure to create a consistent hash filter for each additional broker in the cluster. 14.7.1.2. Filtering client IDs using a regular expression You can partition client connections by configuring brokers to apply a regular expression filter to a part of the client ID in client connections. A broker only accepts a connection if the result of the regular expression filter matches the local target filter configured for the broker. If a match is not found, the broker rejects the connection. This process is repeated on each broker in the cluster until a match is found and a broker accepts the connection. Prerequisites A common string in each client ID that can be filtered by a regular expression. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file for the first broker. Create a connection-routers element and create a connection-route to filter part of the client ID. For example: <configuration> <core> ... <connection-routers> <connection-route name="regex-routing"> <key>CLIENT_ID</target-key> <key-filter>^.{3}</key-filter> <local-target-filter>NULL|CL1</local-target-filter> </connection-route> </connection-routers> ... </core> </configuration> connection-route For the connection-route name , specify an identifying string for this routing configuration. You must add this name to each broker acceptor that you want to apply the regular expression filter to. key The key to apply the filter to. To filter the client ID, specify CLIENT_ID in the key field. key-filter The part of the client ID string to which the regular expression is applied to extract a key value. In the example for the first broker above, the broker extracts a key value that is the first 3 characters of the client ID. If, for example, the client ID string is CL100.consumer , the broker extracts a key value of CL1 . After the broker extracts the key value, it compares it to the value of the local-target-filter . If an incoming connection does not have a client ID, or if the broker is unable to extract a key value by using the regular expression specified for the key-filter , the key value is set to NULL. local-target-filter The value that the broker compares to the key value to determine if there is a match and the broker can accept the connection. A value of NULL|CL1 , as shown in the example for the first broker above, matches connections that have no client ID (NULL) or have a 3-character prefix of CL1 in the client ID. Open the <broker_instance_dir> /etc/broker.xml configuration file for the second broker. Create a connection-routers element and create a connection route to filter connections based on a part of the client ID. In the following filter example, the broker uses a regular expression to extract a key value that is the first 3 characters of the client ID. The broker compares the values of NULL and CL2 to the key value to determine if there is a match and the broker can accept the connection. <configuration> <core> ... <connection-routers> <connection-route name="regex-routing"> <key>CLIENT_ID</target-key> <key-filter>^.{3}</key-filter> <local-target-filter>NULL|CL2</local-target-filter> </connection-route> </connection-routers> ... </core> </configuration> Repeat this procedure and create the appropriate connection routing filter for each additional broker in the cluster. 14.7.2. Partitioning client connections to support data gravity To support date gravity, you can partition client connections so that client consumers connect to the broker where the messages that they need to consume are produced. For example, if you have a set of addresses that are used by producer and consumer applications, you can configure the addresses on a specific broker. You can then partition client connections for both producers and consumers that use those addresses so they can only connect to that broker. You can partition client connections based on attributes such as the role assigned to the connecting user, the username of the user, or the hostname or IP address of the client. This section shows an example of how to partition client connections by filtering user roles assigned to client users. If clients are required to authenticate to connect to brokers, you can assign roles to client users and filter connections so only users that match the role criteria can connect to a broker. Prerequisites Clients are configured so they can connect to all of the brokers in the cluster, for example, by using a load balancer or by having all of the broker instances configured in the connection URL. If a broker rejects a connection because the client does not match the partitioning filter criteria configured for that broker, the client must be able to connect to the other brokers in the cluster to find a broker that accepts connections from it. Procedure Open the <broker_instance_dir> /etc/artemis-roles.properties file for the first broker. Add a broker1users role and add users to the role. Open the <broker_instance_dir> /etc/broker.xml configuration file for the first broker. Create a connection-routers element and create a connection-route to filter connections based on the roles assigned to users. For example: <configuration> <core> ... <connection-routers> <connection-route name="role-based-routing"> <key>ROLE_NAME</target-key> <key-filter>broker1users</key-filter> <local-target-filter>broker1users</local-target-filter> </connection-route> </connection-routers> ... </core> </configuration> connection-route For the connection-route name , specify an identifying string for this routing configuration. You must add this name to each broker acceptor that you want to apply the role filter to. key The key to apply the filter to. To configure role-based filtering, specify ROLE_NAME in the key field. key-filter The string or regular expression that the broker uses to filter the user's roles and extract a key value. If the broker finds a matching role, it sets the key value to that role. If it does not find a matching role, the broker sets the key value to NULL. In the above example, the broker applies a filter of broker1users to the client user's roles. After the broker extracts the key value, it compares it to the value of the local-target-filter . local-target-filter The value that the broker compares to the key value to determine if there is a match and the broker can accept the connection. In the example, the broker compares a value of broker1users to the key value. It there is a match, which means that the user has a broker1users role, the broker accepts the connection. Repeat this procedure and specify the appropriate role in the filter to partition clients on other brokers in the cluster. 14.7.3. Adding connection routes to acceptors After you configure a connection route on a broker, you must add the route to one or more of the broker's acceptors to partition client connections. After you add a connection route to an acceptor, the broker applies the filter configured in the connection route to connections received by the acceptor. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file for the first broker. For each acceptor on which you want to enable partitioning, append the router key and specify the connection-route name . In the following example, a connection-route name of consistent-hash-routing is added to the artemis acceptor. <configuration> <core> ... <acceptors> ... <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;router="consistent-hash-routing" </acceptor> </acceptors> ... </core> </configuration> Repeat this procedure to specify the appropriate connection route filter for each broker in the cluster. | [
"<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> 1 <connector name=\"broker2\">tcp://localhost:61618</connector> 2 <connector name=\"broker3\">tcp://localhost:61619</connector> </connectors> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <static-connectors> <connector-ref>broker2-connector</connector-ref> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>",
"<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> </connectors> </core> </configuration>",
"<configuration> <core> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <local-bind-address>172.16.9.3</local-bind-address> <local-bind-port>-1</local-bind-port> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> </core> </configuration>",
"<configuration> <core> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <local-bind-address>172.16.9.7</local-bind-address> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>",
"<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> </connectors> </core> </configuration>",
"<configuration> <core> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> </core> </configuration>",
"<configuration> <core> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>",
"<configuration> <core> <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> </core> </configuration>",
"<configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BIND_TABLE</bindings-table-name> <message-table-name>MSG_TABLE</message-table-name> <large-message-table-name>LGE_TABLE</large-message-table-name> <page-store-table-name>PAGE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_TABLE<node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>15000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration>",
"<configuration> <core> <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> </core> </configuration>",
"<configuration> <core> <ha-policy> <shared-store> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <primary> <coordination-id>production-001</coordination-id> <manager> <properties> <property key=\"connect-string\" value=\"192.168.1.10:6666,192.168.2.10:6667,192.168.3.10:6668\"/> </properties> </manager> </primary> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </master> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </slave> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <replication> <slave> <vote-retries>12</vote-retries> <vote-retry-wait>5000</vote-retry-wait> </slave> </replication> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <live-only> </live-only> </ha-policy> </core> </configuration>",
"<live-only> <scale-down> <connectors> <connector-ref>broker1-connector</connector-ref> </connectors> </scale-down> </live-only>",
"<live-only> <scale-down> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </scale-down> </live-only>",
"<live-only> <scale-down> <group-name>my-group-name</group-name> </scale-down> </live-only>",
"<configuration> <core> <ha-policy> <shared-store> <colocated> <request-backup>true</request-backup> <max-backups>1</max-backups> <backup-request-retries>-1</backup-request-retries> <backup-request-retry-interval>5000</backup-request-retry-interval/> <backup-port-offset>150</backup-port-offset> <excludes> <connector-ref>remote-connector</connector-ref> </excludes> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </colocated> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <message-load-balancing>ON_DEMAND</message-load-balancing> </cluster-connection> </cluster-connections> </core> </configuration>",
"<configuration> <core> <address-settings> <address-setting match=\"my.queue\"> <redistribution-delay>5000</redistribution-delay> </address-setting> </address-settings> </core> </configuration>",
"<configuration> <core> <grouping-handler name=\"my-grouping-handler\"> <type>LOCAL</type> <timeout>10000</timeout> </grouping-handler> </core> </configuration>",
"<configuration> <core> <grouping-handler name=\"my-grouping-handler\"> <type>REMOTE</type> <timeout>5000</timeout> </grouping-handler> </core> </configuration>",
"<configuration> <core> <connection-routers> <connection-route name=\"consistent-hash-routing\"> <key>CLIENT_ID</target-key> <local-target-filter>NULL|0</local-target-filter> <policy name=\"CONSISTENT_HASH_MODULO\"> <property key=\"modulo\" value=\"<number_of_brokers_in_cluster>\"> </property> </policy> </connection-route> </connection-routers> </core> </configuration>",
"<configuration> <core> <connection-routers> <connection-route name=\"consistent-hash-routing\"> <key>CLIENT_ID</target-key> <local-target-filter>NULL|1</local-target-filter> <policy name=\"CONSISTENT_HASH_MODULO\"> <property key=\"modulo\" value=\"<number_of_brokers_in_cluster>\"> </property> </policy> </connection-route> </connection-routers> </core> </configuration>",
"<configuration> <core> <connection-routers> <connection-route name=\"regex-routing\"> <key>CLIENT_ID</target-key> <key-filter>^.{3}</key-filter> <local-target-filter>NULL|CL1</local-target-filter> </connection-route> </connection-routers> </core> </configuration>",
"<configuration> <core> <connection-routers> <connection-route name=\"regex-routing\"> <key>CLIENT_ID</target-key> <key-filter>^.{3}</key-filter> <local-target-filter>NULL|CL2</local-target-filter> </connection-route> </connection-routers> </core> </configuration>",
"<configuration> <core> <connection-routers> <connection-route name=\"role-based-routing\"> <key>ROLE_NAME</target-key> <key-filter>broker1users</key-filter> <local-target-filter>broker1users</local-target-filter> </connection-route> </connection-routers> </core> </configuration>",
"<configuration> <core> <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;router=\"consistent-hash-routing\" </acceptor> </acceptors> </core> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/setting-up-broker-cluster-configuring |
15.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts | 15.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts If the Manager virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate to it. This memory can be reserved on multiple self-hosted engine nodes by using a scheduling policy. The scheduling policy checks if enough memory to start the Manager virtual machine will remain on the specified number of additional self-hosted engine nodes before starting or migrating any virtual machines. See Creating a Scheduling Policy in the Administration Guide for more information about scheduling policies. To add more self-hosted engine nodes to the Red Hat Virtualization Manager, see Section 15.4, "Adding Self-Hosted Engine Nodes to the Red Hat Virtualization Manager" . Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts Click Compute Clusters and select the cluster containing the self-hosted engine nodes. Click Edit . Click the Scheduling Policy tab. Click + and select HeSparesCount . Enter the number of additional self-hosted engine nodes that will reserve enough free memory to start the Manager virtual machine. Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/Configuring_Memory_Slots_Reserved_for_the_SHE |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.